User story #4395
Closed
How difficult would it be to implement support for system manufacturer/model?
Added by Alex Tkachenko almost 11 years ago. Updated 7 months ago.
Description
This information is actually collected by the fusioninventory and is available as SMANUFACTURER/SMODEL.
However it does not make it into LDAP and is therefore not available as a search criterion for building groups. The available fields (i.e. BIOS attributes) are not sufficiently selective.
In our case we have sun and supermicro servers which both have American Megatrends BIOS.
I may be mistaken, but it looks like SMODEL is actually loaded as BIOS Name. Unfortunately, given a sheer number of available Supermicro server models, building a proper search criteria becomes complicated. Besides it does not guarantee that some new model released in the future would be automatically included.
In all considered cases the SMANUFACTURER field clearly identifies the system manufacturer, either Supermicro or Sun Microsystems (well, the latter may now also be "Oracle Corporation", but that is still just two additional cases to consider).
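For reference, these DMI fields can be read even without FusionInventory. A minimal sketch, assuming the usual Linux sysfs layout exposed by the kernel's dmi-id driver (sys_vendor/product_name correspond to dmidecode's system-manufacturer/system-product-name keywords):

```shell
#!/bin/sh
# Read a DMI identification field from sysfs, falling back to "unknown"
# when the file is absent or unreadable (e.g. inside a container where
# dmidecode also fails).
read_dmi() {
  f="/sys/class/dmi/id/$1"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "unknown"
  fi
}

vendor=$(read_dmi sys_vendor)    # = SMANUFACTURER
model=$(read_dmi product_name)   # = SMODEL
echo "system: $vendor / $model"
```

On a Supermicro box this prints the "Supermicro" / model pair directly, with no regexp over model names needed.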
Files
openvz-mothership.ocs (25.3 KB), Alex Tkachenko, 2014-02-21 03:18
openvz-container.ocs (2.98 KB), Alex Tkachenko, 2014-02-21 03:18
proxmox-server.ocs (26.1 KB), Alex Tkachenko, 2014-02-21 03:18
proxmox-kvm-guest.ocs (9.66 KB), Alex Tkachenko, 2014-02-21 03:18
proxmox-openvz-container.ocs (3.7 KB), Alex Tkachenko, 2014-02-21 03:18
Updated by Nicolas CHARLES almost 11 years ago
Hi Alex,
This is not a really difficult task by itself; we simply need to extend the information stored in LDAP, and extend the parser in ldap-inventory.
Before extending it, a list of all the new values you'd need would help us fit your needs.
Updated by Alex Tkachenko almost 11 years ago
Well, since you asked - I went through the process of categorizing (grouping) my network population yesterday and was able to resolve practically everything with what is available right now. The system manufacturer is the only thing available through dmidecode (which I routinely used in the past) that I am lacking for now, and its importance is in determining what kind of firmware updates the system might need.
Another thing I had some problems with was unambiguously detecting the RAID used on the system. Of course, you could look at the available controller's name, but I found that those are quite inconsistent from vendor to vendor. Besides, I have several cases where more than one RAID adapter is installed and actually neither is used (the system is on s/w RAID). I have no idea how to improve things here - maybe somebody else does - I am just trying to bring up the importance of the RAID subsystem.
One more idea - would it be possible to use cfengine classes for categorization? If the Rudder server could collect them all from the reports and make them available as rule search criteria, it would be wonderful. Maybe not at the first pass, but convergently speaking :) it should help.
The major problem I am having so far with categorization and policies in general is support for OpenVZ. It is somehow detected as virtual, but with absolutely no details, so I cannot create a group for OpenVZ containers. These may need special exceptions in the policy, as they rely on some of the host's services (i.e. ntp). But knowing about the OpenVZ host is even more important before applying the policy, as an attempt to detect and maintain a process/service in this environment may mistake the containers' processes for its own.
And by the way, dmidecode, unconditionally pulled in by the Rudder rpm dependencies, does not work in the containers, as the host ("mothership") denies the containers access to certain kernel/memory areas.
I have also run into some issues with detecting network interfaces from within the policy - but this may be purely a cfengine problem of ignoring virtual interfaces by default.
The question is - are you guys aware of OpenVZ-related issues or in other words, is there any work towards ensuring proper support for it?
If needed, we may create a separate ticket for this - I would gladly provide any necessary information and perform the necessary testing (bandwidth permitting). OpenVZ (or its Proxmox flavor) makes up quite a sizable share of our network (about 20%), so it is hard for me to ignore.
Updated by Nicolas CHARLES almost 11 years ago
- Status changed from New to Discussion
- Assignee set to Alex Tkachenko
Thank you for this very detailed answer.
I'll answer in the quote
Alex Tkachenko wrote:
Well, since you asked - I went through the process of categorizing (grouping) my network population yesterday and was able to resolve practically everything with what is available right now. The system manufacturer is the only thing available through dmidecode (which I routinely used in the past) that I am lacking for now, and its importance is in determining what kind of firmware updates the system might need.
ok, so it is really important.
Another thing I had some problems with was unambiguously detecting the RAID used on the system. Of course, you could look at the available controller's name, but I found that those are quite inconsistent from vendor to vendor. Besides, I have several cases where more than one RAID adapter is installed and actually neither is used (the system is on s/w RAID). I have no idea how to improve things here - maybe somebody else does - I am just trying to bring up the importance of the RAID subsystem.
Ha, interesting point.
Do you know if, at least, it is more or less properly set in the inventory as sent by the node? In the Hardware/Storage section of the web interface, there is a Manufacturer field. Is it the manufacturer of the RAID system?
One more idea - would it be possible to use cfengine classes for categorization? If the Rudder server could collect them all from the reports and make them available as rule search criteria, it would be wonderful. Maybe not at the first pass, but convergently speaking :) it should help.
This is a very good remark. Would you need all the classes defined in the run, or only hardclasses ?
Could you provide an example of use case for this ?
The major problem I am having so far with categorization and policies in general is support for OpenVZ. It is somehow detected as virtual, but with absolutely no details, so I cannot create a group for OpenVZ containers. These may need special exceptions in the policy, as they rely on some of the host's services (i.e. ntp). But knowing about the OpenVZ host is even more important before applying the policy, as an attempt to detect and maintain a process/service in this environment may mistake the containers' processes for its own.
Oh.
Fusion Inventory 2.2.2 (the version we use) should have support for Virtuozzo, and so should we.
Would it be possible for you to send us an inventory for an OpenVZ container and an OpenVZ host? (with proper anonymization of the information within, like the cfengine key, hostnames, IPs)
And by the way, dmidecode, unconditionally pulled in by the Rudder rpm dependencies, does not work in the containers, as the host ("mothership") denies the containers access to certain kernel/memory areas.
I wasn't aware of this, sorry. Does it completely prevent the inventory generation?
I have also run into some issues with detecting network interfaces from within the policy - but this may be purely a cfengine problem of ignoring virtual interfaces by default.
The question is - are you guys aware of OpenVZ-related issues or in other words, is there any work towards ensuring proper support for it?
We are, for the CFEngine part. We didn't realize there were inventory issues, and we are really sorry about this :(
If needed, we may create a separate ticket for this - I would gladly provide any necessary information and perform the necessary testing (bandwidth permitting). OpenVZ (or its Proxmox flavor) makes up quite a sizable share of our network (about 20%), so it is hard for me to ignore.
Having two inventories for host and container would really help us fix the issues you stated
Thank you very very much for your time and patience
Updated by Alex Tkachenko almost 11 years ago
That's quite a few questions :)
I will have to answer them one at a time, as the amount of work required would make the web session time out :)
First the discrepancies between dmidecode, inventory and the web interface.
It is clear that the inventory process maps BMANUFACTURER to both BIOS Editor and BIOS Version.
While using BMANUFACTURER for the Editor field is OK in most cases (except when AMIBIOS is used - i.e. on Sun and Supermicro),
the BIOS version has a direct field of its own in the inventory (BVERSION) - so why not use it?
dmidecode reports things correctly in all cases.
The most definitive source for the system manufacturer appears to be SMANUFACTURER/SMODEL,
which correspond to dmidecode -s system-manufacturer and dmidecode -s system-product-name.
Note that Supermicro servers are completely missed - there is not a hint in the web interface referring to Supermicro
(although the inventory has it), so to make a special group for those one would have to regexp all the model names - which is error-prone and requires revisiting each time a new model is introduced.
**** HP ProLiant BL465c (blade server)
Hardware->BIOS (Name/Editor/Version): ProLiant BL465c G1 HP HP
<BIOS>
<ASSETTAG />
<BDATE>12/08/2009</BDATE>
<BMANUFACTURER>HP</BMANUFACTURER>
<BVERSION>A13</BVERSION>
<MMANUFACTURER />
<MMODEL />
<MSN />
<SKUNUMBER>407234-B21</SKUNUMBER>
<SMANUFACTURER>HP</SMANUFACTURER>
<SMODEL>ProLiant BL465c G1</SMODEL>
<SSN>USM71804MT</SSN>
</BIOS>

**** HP ProLiant DL385 G2
Hardware->BIOS (Name/Editor/Version): ProLiant DL385 G2 HP HP
<BIOS>
<ASSETTAG />
<BDATE>07/11/2009</BDATE>
<BMANUFACTURER>HP</BMANUFACTURER>
<BVERSION>A09</BVERSION>
<MMANUFACTURER />
<MMODEL />
<MSN />
<SKUNUMBER>414109-B21</SKUNUMBER>
<SMANUFACTURER>HP</SMANUFACTURER>
<SMODEL>ProLiant DL385 G2</SMODEL>
<SSN>USE741N5GZ</SSN>
</BIOS>

**** Dell PowerEdge 2950
Hardware->BIOS (Name/Editor/Version): PowerEdge 2950 Dell Inc. Dell Inc.
<BIOS>
<ASSETTAG />
<BDATE>10/30/2010</BDATE>
<BMANUFACTURER>Dell Inc.</BMANUFACTURER>
<BVERSION>2.7.0</BVERSION>
<MMANUFACTURER>Dell Inc.</MMANUFACTURER>
<MMODEL>0DP246</MMODEL>
<MSN>..CN7082183E00LB.</MSN>
<SKUNUMBER />
<SMANUFACTURER>Dell Inc.</SMANUFACTURER>
<SMODEL>PowerEdge 2950</SMODEL>
<SSN>FHFCYF1</SSN>
</BIOS>

**** Dell PowerEdge R610
Hardware->BIOS (Name/Editor/Version): PowerEdge R610 Dell Inc. Dell Inc.
<BIOS>
<ASSETTAG />
<BDATE>10/30/2009</BDATE>
<BMANUFACTURER>Dell Inc.</BMANUFACTURER>
<BVERSION>1.3.6</BVERSION>
<MMANUFACTURER>Dell Inc.</MMANUFACTURER>
<MMODEL>0XDN97</MMODEL>
<MSN>..CN701639CD01EF.</MSN>
<SKUNUMBER />
<SMANUFACTURER>Dell Inc.</SMANUFACTURER>
<SMODEL>PowerEdge R610</SMODEL>
<SSN>4RH2QL1</SSN>
</BIOS>

**** IBM System x3550 M4
Hardware->BIOS (Name/Editor/Version): IBM System x3550 M4 Server -[7914AC1]- IBM IBM
<BIOS>
<ASSETTAG>none</ASSETTAG>
<BDATE>11/21/2012</BDATE>
<BMANUFACTURER>IBM</BMANUFACTURER>
<BVERSION>-[D7E124AUS-1.30]-</BVERSION>
<MMANUFACTURER>IBM</MMANUFACTURER>
<MMODEL>00J6242</MMODEL>
<MSN>2AA01C</MSN>
<SKUNUMBER />
<SMANUFACTURER>IBM</SMANUFACTURER>
<SMODEL>IBM System x3550 M4 Server -[7914AC1]-</SMODEL>
<SSN>KQ7C2B2</SSN>
</BIOS>

**** IBM System x3550 M3
Hardware->BIOS (Name/Editor/Version): System x3550 M3 -[7944AC1]- IBM Corp. IBM Corp.
<BIOS>
<ASSETTAG>none</ASSETTAG>
<BDATE>02/02/2012</BDATE>
<BMANUFACTURER>IBM Corp.</BMANUFACTURER>
<BVERSION>-[D6E156BUS-1.14]-</BVERSION>
<MMANUFACTURER>IBM</MMANUFACTURER>
<MMODEL>00D4062</MMODEL>
<MSN>23P0F4</MSN>
<SKUNUMBER>XxXxXxX</SKUNUMBER>
<SMANUFACTURER>IBM</SMANUFACTURER>
<SMODEL>System x3550 M3 -[7944AC1]-</SMODEL>
<SSN>KQ0Y792</SSN>
</BIOS>

**** SUN FIRE X4270 M2
Hardware->BIOS (Name/Editor/Version): SUN FIRE X4270 M2 SERVER American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG />
<BDATE>05/23/2011</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>08080102</BVERSION>
<MMANUFACTURER>Oracle Corporation</MMANUFACTURER>
<MMODEL>ASSY,MOTHERBOARD,X4170</MMODEL>
<MSN>0328MSL-1042BA0EB1</MSN>
<SKUNUMBER>4715530-1</SKUNUMBER>
<SMANUFACTURER>Oracle Corporation</SMANUFACTURER>
<SMODEL>SUN FIRE X4270 M2 SERVER</SMODEL>
<SSN>1043FMM147</SSN>
</BIOS>

**** Sun Fire X4240
Hardware->BIOS (Name/Editor/Version): Sun Fire X4240 American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG />
<BDATE>10/26/2009</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>0ABMN068</BVERSION>
<MMANUFACTURER>Sun Microsystems</MMANUFACTURER>
<MMODEL>Sun Fire X4240</MMODEL>
<MSN>2029QTF0913MD0LCA</MSN>
<SKUNUMBER>602-4697-01</SKUNUMBER>
<SMANUFACTURER>Sun Microsystems</SMANUFACTURER>
<SMODEL>Sun Fire X4240</SMODEL>
<SSN>0921QAS005</SSN>
</BIOS>

**** Supermicro X9DR7/E-(J)LN4F
Hardware->BIOS (Name/Editor/Version): X9DR7/E-(J)LN4F American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG>To Be Filled By O.E.M.</ASSETTAG>
<BDATE>05/14/2013</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>1.0a</BVERSION>
<MMANUFACTURER>Supermicro</MMANUFACTURER>
<MMODEL>X9DR7/E-(J)LN4F</MMODEL>
<MSN>UM25S30779</MSN>
<SKUNUMBER>1234567890</SKUNUMBER>
<SMANUFACTURER>Supermicro</SMANUFACTURER>
<SMODEL>X9DR7/E-(J)LN4F</SMODEL>
<SSN>1234567890</SSN>
</BIOS>

**** Supermicro H8DA8/H8DAR
Hardware->BIOS (Name/Editor/Version): H8DA8/H8DAR American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG>To Be Filled By O.E.M.</ASSETTAG>
<BDATE>05/22/2006</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>080010</BVERSION>
<MMANUFACTURER>Supermicro</MMANUFACTURER>
<MMODEL>H8DA8</MMODEL>
<MSN>1234567890</MSN>
<SKUNUMBER />
<SMANUFACTURER>Supermicro</SMANUFACTURER>
<SMODEL>H8DA8/H8DAR</SMODEL>
<SSN>1234567890</SSN>
</BIOS>
Updated by Alex Tkachenko almost 11 years ago
For the sake of completeness - here is one case where the system manufacturer looks weird (but I think this is the only case out of my 600+ servers):
- Sun Fire X4240
Hardware->BIOS (Name/Editor/Version): Sun Fire X4240 American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG>Not Available</ASSETTAG>
<BDATE>10/26/2009</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>0ABMN068</BVERSION>
<MMANUFACTURER>Sun Microsystems</MMANUFACTURER>
<MMODEL>Sun Fire X4240</MMODEL>
<MSN>Not Available</MSN>
<SKUNUMBER>Not Available</SKUNUMBER>
<SMANUFACTURER>Not Available</SMANUFACTURER>
<SMODEL>Sun Fire X4240</SMODEL>
<SSN>Not Available</SSN>
</BIOS>
I do not know what is wrong with it, but if the SMANUFACTURER is not available, the M/B manufacturer or the BIOS manufacturer could be used as a last resort.
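A fallback of that sort is easy to sketch. Here pick_manufacturer is a hypothetical helper (not Rudder code) that returns the first candidate that is neither empty nor a known placeholder string:

```shell
#!/bin/sh
# Hypothetical fallback: prefer SMANUFACTURER, then MMANUFACTURER,
# then BMANUFACTURER; treat empty or placeholder values as missing.
pick_manufacturer() {
  for v in "$@"; do
    case "$v" in
      ""|"Not Available"|"To Be Filled By O.E.M.") continue ;;
      *) printf '%s\n' "$v"; return 0 ;;
    esac
  done
  echo "Unknown"
}

# The Sun Fire X4240 case above: SMANUFACTURER is "Not Available",
# so the motherboard manufacturer wins.
pick_manufacturer "Not Available" "Sun Microsystems" "American Megatrends Inc."
```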
Updated by Alex Tkachenko almost 11 years ago
Now the RAID part. Below you will find different hardware RAID controllers on different sets of servers.
I was unable to identify a Linux software RAID system based on the information collected during the inventory (aside from some processes, i.e. md?_raid?, which may or may not be running).
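One common heuristic outside of what the inventory collects is to look at /proc/mdstat. A sketch (the path is parameterized only so the logic can be exercised on a sample file):

```shell
#!/bin/sh
# Heuristic: the system uses Linux software RAID if /proc/mdstat
# lists at least one md device in "active" state.
has_sw_raid() {
  mdstat="${1:-/proc/mdstat}"
  grep -q '^md[0-9][0-9]* : active' "$mdstat" 2>/dev/null
}
```

This sidesteps the unreliable md?_raid? process check, since the md devices are listed whether or not the kernel threads happen to be visible.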
When building matching rules I was trying to be as generic as possible without compromising selectivity. In theory, Storages data should be used to make sure that not only the controller is installed, but the OS is actually using the devices provided by it. Unfortunately this is not always possible due to either syntax limitations (all the conditions are either ANDed or ORed, but nothing in between) or due to the fact that different vendors have different implementations (megacli is quite representative for this purpose).
First, the most commonly present LSI MegaRAID SAS, which is now offered with Dell, IBM, Oracle, Supermicro, etc. These controllers can be managed with the LSI MegaCLI utility.
Note - lspci detects MegaRAID SAS in all cases
The matching rule I've got so far is
Storages->Manufacturer = "LSI"
|| Storages->Model Regex "PERC H7800"
|| Controllers->Name Regex "MegaRAID SAS.*"
|| Storages->Model Regex "ServeRAID.*"
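Since the search syntax cannot mix AND and OR, the same classification is easier to express agent-side. A sketch that buckets one lspci line into a controller family (the family labels are my own, and the match strings are taken from the dumps in this comment):

```shell
#!/bin/sh
# Classify a RAID controller family from a single lspci line.
# Labels are illustrative, not Rudder's; order matters (the
# "MegaRAID SAS" test must run before the bare "MegaRAID" one).
raid_family() {
  case "$1" in
    *"MegaRAID SAS"*) echo "megaraid-sas" ;;   # LSI MegaCLI-managed
    *"MegaRAID"*)     echo "megaraid-sata" ;;  # older LSI MegaRC-managed
    *"Adaptec"*)      echo "adaptec" ;;
    *"Smart Array"*)  echo "hp-smart-array" ;;
    *)                echo "unknown" ;;
  esac
}
```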
*** Supermicro X9DR7/E-(J)LN4F
lspci | grep -i RAID output:
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)
81:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)
Hardware->Controllers: No raid controller entries
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 119 GB 3.24 LSI MR9271-8i 3600605b006961b30193306c52283aad8 disk 1
sdb scsi 60 TB 3.24 LSI MR9271-8i 3600605b006961b30193306c52283b042 disk 1
sdc scsi 60 TB 3.24 LSI MR9286CV-8e 3600605b005be3a60193307782cc4859a disk 1
sdd scsi 51.8 TB 3.24 LSI MR9286CV-8e 3600605b005be3a60193307782cc552c8 disk 1

*** Dell Inc. PowerEdge R610
lspci | grep -i RAID output:
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 04)
Hardware->Controllers: No raid controller entries
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 1.36 TB 2.0. DELL PERC H700 36a4badb00f006e00133ac06f070e0b4b disk 1

*** Dell Inc. PowerEdge R610
lspci | grep -i RAID output:
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
Hardware->Controllers: MegaRAID SAS 1078 LSI Logic / Symbios Logic RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 200 GB 1.22 DELL PERC 6/i 36782bcb02a72c700189b046b0d3045b0 disk 1
sdb scsi 2.08 TB 1.22 DELL PERC 6/i 36782bcb02a72c700189b04710d87d857 disk 1

*** Dell Inc. PowerEdge 2950
lspci | grep -i RAID output:
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
Hardware->Controllers: MegaRAID SAS 1078 LSI Logic / Symbios Logic RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 474 GB 1.22 DELL PERC 6/i 36001e4f02fb7d80015d5e2d208d25f45 disk 1

*** Oracle Corporation SUN FIRE X4270 M2 SERVER
lspci | grep -i RAID output:
0d:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)
Hardware->Controllers: No raid controller entries
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 278 GB 2.90 LSI MR9261-8i 3600605b002931cd00477f16b5fdbddb5 disk 1
sdb scsi 557 GB 2.90 LSI MR9261-8i 3600605b002931cd004781f5f09797c6e disk 1

*** IBM System x3550 M4 Server -[7914AC1]-
lspci | grep -i RAID output:
1b:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)
Hardware->Controllers: No raid controller entries
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 278 GB 3.19 IBM ServeRAID M5110 3600605b0058885201857437214dd96fe disk 1

*** IBM System x3550 M3 -[7944AC1]-
lspci | grep -i RAID output:
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)
Hardware->Controllers: No raid controller entries
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda SCSI 212 IBM ServeRAIDM5015 disk 1
An older LSI MegaRAID SATA (manageable with the LSI MegaRC utility).
The matching rule is Controllers->Name = "MegaRAID" && Storages->Manufacturer = "MegaRAID"
*** Dell Computer Corporation PowerEdge 1850
lspci | grep -i RAID output:
03:0b.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID (rev 01)
Hardware->Controllers: MegaRAID LSI Logic / Symbios Logic RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda SCSI 352B MegaRAID LD0 RAID1 69G disk 1
Adaptec RAID controller - it used to be quite common, but unfortunately now I only have it in some Sun servers.
The matching criteria is Controllers->Manufacturer = "Adaptec"
Note that the model reported for the storages below is actually the name of the RAID device given during its creation, and could be practically anything.
*** Sun Microsystems Sun Fire X4240
lspci | grep -i RAID output:
04:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
Hardware->Controllers: AAC-RAID Adaptec RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 140 GB V1.0 Sun SYSTEM SSun_SYSTEM_1662887B disk 1
sdb scsi 1.78 TB V1.0 Sun DATA SSun_DATA_EBAEA87B disk 1

*** Sun Microsystems Sun Fire X4240
lspci | grep -i RAID output:
04:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
Hardware->Controllers: AAC-RAID Adaptec RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
sda scsi 140 GB V1.0 Sun SYSTEM SSun_SYSTEM_EE58950B disk 1
sdb scsi 1.78 TB V1.0 Sun DATA SSun_DATA_D130A50B disk 1
HP integrated Raid controller (Smart Array E200i)
Matching criteria is Controllers->Name Regex "Smart Array.*SAS Controller.*"
I would have also added Storages->Name Regex "cciss.*", but storages are filled in inconsistently.
*** HP ProLiant BL465c G1 (blade server)
lspci | grep -i RAID output:
50:08.0 RAID bus controller: Hewlett-Packard Company Smart Array E200i (SAS Controller)
Hardware->Controllers: Smart Array E200i (SAS Controller) Hewlett-Packard Company RAID bus controller 1
Hardware->Storages: No Storage entries present (the system has two drives combined into a mirror - /dev/cciss/c0d0)

*** HP ProLiant DL385 G2
lspci | grep -i RAID output:
0c:08.0 RAID bus controller: Hewlett-Packard Company Smart Array E200i (SAS Controller)
Hardware->Controllers: Smart Array E200i (SAS Controller) Hewlett-Packard Company RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
cciss/c0d0 IDE 68.3 GB 1.86? LOGICAL_VOLUME LOGICAL_VOLUME 3600508b100104c3953555a3237330004 disk 1
HP DG072ABAB3 SAS 70.3 GB HPDE Hewlett Packard HP DG072ABAB3 3NP1VTZX00009810KEVH disk 1
Again, two drives are combined into a single RAID mirror. I don't know why the second entry (an individual drive) is shown, but in any case the inventory actually has two of them (with different serial numbers):
<STORAGES>
<DESCRIPTION>SAS</DESCRIPTION>
<DISKSIZE>72000</DISKSIZE>
<FIRMWARE>HPDE</FIRMWARE>
<MANUFACTURER>Hewlett Packard</MANUFACTURER>
<MODEL>HP DG072ABAB3</MODEL>
<NAME>HP DG072ABAB3</NAME>
<SERIALNUMBER>3NP1W9TA00009811NPMA</SERIALNUMBER>
<TYPE>disk</TYPE>
</STORAGES>
<STORAGES>
<DESCRIPTION>SAS</DESCRIPTION>
<DISKSIZE>72000</DISKSIZE>
<FIRMWARE>HPDE</FIRMWARE>
<MANUFACTURER>Hewlett Packard</MANUFACTURER>
<MODEL>HP DG072ABAB3</MODEL>
<NAME>HP DG072ABAB3</NAME>
<SERIALNUMBER>3NP1VTZX00009810KEVH</SERIALNUMBER>
<TYPE>disk</TYPE>
</STORAGES>
<STORAGES>
<DESCRIPTION>IDE</DESCRIPTION>
<DISKSIZE>69974</DISKSIZE>
<FIRMWARE>1.86?</FIRMWARE>
<MANUFACTURER>LOGICAL_VOLUME</MANUFACTURER>
<MODEL>LOGICAL_VOLUME</MODEL>
<NAME>cciss/c0d0</NAME>
<SERIALNUMBER>3600508b100104c3953555a3237330004</SERIALNUMBER>
<TYPE>disk</TYPE>
</STORAGES>

*** HP ProLiant DL365 G1
lspci | grep -i RAID output:
46:08.0 RAID bus controller: Hewlett-Packard Company Smart Array E200i (SAS Controller)
Hardware->Controllers: Smart Array E200i (SAS Controller) Hewlett-Packard Company RAID bus controller 1
Hardware->Storages (Name/Description/Size/Firmware/Manufacturer/Model/Serial/Type/Quantity):
cciss/c0d0 IDE 68.3 GB 1.86? LOGICAL_VOLUME LOGICAL_VOLUME 3600508b1001034323820202020200005 disk 1
Same RAID mirror as in the two previous cases, but I would say the Storages inventory was collected right in this case.
Updated by Nicolas CHARLES almost 11 years ago
Alex Tkachenko wrote:
For the sake of completeness - here is one case where the system manufacturer looks weird (but I think this is the only case out of my 600+ servers):
- Sun Fire X4240
Hardware->BIOS (Name/Editor/Version): Sun Fire X4240 American Megatrends Inc. American Megatrends Inc.
<BIOS>
<ASSETTAG>Not Available</ASSETTAG>
<BDATE>10/26/2009</BDATE>
<BMANUFACTURER>American Megatrends Inc.</BMANUFACTURER>
<BVERSION>0ABMN068</BVERSION>
<MMANUFACTURER>Sun Microsystems</MMANUFACTURER>
<MMODEL>Sun Fire X4240</MMODEL>
<MSN>Not Available</MSN>
<SKUNUMBER>Not Available</SKUNUMBER>
<SMANUFACTURER>Not Available</SMANUFACTURER>
<SMODEL>Sun Fire X4240</SMODEL>
<SSN>Not Available</SSN>
</BIOS>
I do not know what is wrong with it, but if the SMANUFACTURER is not available, the M/B manufacturer or the BIOS manufacturer could be used as a last resort.
Thank you Alex for these comprehensive reports.
So if I understand correctly, we should really use SMANUFACTURER, except when the value is empty or "Not Available", in which case we fall back to MMANUFACTURER.
Updated by Nicolas CHARLES almost 11 years ago
For the RAID part, if I understand correctly, the Inventory generated by Fusion doesn't include all the necessary data, and needs to be modified. Am I correct ?
Updated by Alex Tkachenko almost 11 years ago
- File openvz-mothership.ocs openvz-mothership.ocs added
- File openvz-container.ocs openvz-container.ocs added
- File proxmox-server.ocs proxmox-server.ocs added
- File proxmox-kvm-guest.ocs proxmox-kvm-guest.ocs added
- File proxmox-openvz-container.ocs proxmox-openvz-container.ocs added
If we introduce a "System" section then yes, the vendor would be SMANUFACTURER, as you described (and the model could also be presented there). The BIOS section could be left as-is for backward compatibility, but it probably has to be fixed to present the Version properly.
For the RAID part - I do not even know where to start :) but yes, it is mostly missing data, with some rare cases where excessive data slips through.
Here comes information related to openvz.
I am attaching several inventories (as described below). All the inventories have been edited to remove irrelevant sections (i.e. SOFTWARE, ENVS, USERLIST, PROCESSES, CFKEY). Hostnames and IPs have been changed, but it should still be sufficient for illustrative purposes.
- openvz-mothership.ocs - a CentOS-based OpenVZ Mothership
Note that ve-101 is missing from the list of virtual machines running on this mothership, probably because it does not have an IP address (or rather, its IP configuration is non-standard):
# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
101 52 running - ve-101-fqdn
102 36 running 172.16.36.102 ve-102-fqdn
103 35 running 172.16.36.103 ve-103-fqdn
104 40 running 172.16.36.104 ve-104-fqdn
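The skipped container can be spotted mechanically. A sketch that prints the CTIDs of running containers whose IP_ADDR column is "-" in `vzlist -a` output (column layout as shown above; it reads from a file so the parsing can be tested without vzlist):

```shell
#!/bin/sh
# Print CTIDs of running containers with no IP in `vzlist -a` output.
# Expected header: CTID NPROC STATUS IP_ADDR HOSTNAME
missing_ip_ctids() {
  awk 'NR > 1 && $3 == "running" && $4 == "-" { print $1 }' "$1"
}
```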
- openvz-container.ocs - an OpenVZ Container running on OpenVZ Mothership
This is actually the ve-101 container - note that it does not have any network devices in the inventory, but somehow the Web GUI shows them right:
eth0 172.16.36.101 00:18:51:fb:5b:d6 Up
lo 127.0.0.1 00:00:00:00:00:00 Up
venet0
- proxmox-server.ocs - Proxmox Server host
If you are not familiar with Proxmox - it is essentially a Debian system running a RHEL kernel with OpenVZ modifications, which can host both OpenVZ-style containers and KVM/QEMU guests. For whatever reason only one OpenVZ container (ve-103) is listed in the GUI, and KVM guests are not listed at all:
# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
100 kvm-100 running 2048 32.00 550842
101 kvm-101 running 2048 32.00 109995
104 kvm-104 running 2048 64.00 779849
# vzlist
CTID NPROC STATUS IP_ADDR HOSTNAME
102 97 running 172.16.4.222 ve-102.fqdn
103 196 running 172.16.4.223 ve-103.fqdn
Also the hostname of the only listed container was truncated mid-domain.
- proxmox-kvm-guest.ocs - KVM guest, running on Proxmox Server host (ve-104)
- proxmox-openvz-container.ocs - OpenVZ container, running on Proxmox Server host (ve-102)
Since you mentioned that Rudder does support Virtuozzo, I did some research and found that you are using an OpenVZ patch to support vzps. Proxmox does have vzps installed, but my older OpenVZ installations do not - I should probably get it deployed right away. I have also figured out that you already define several OpenVZ-related classes (virt_host_vz/virt_guest_vz), so I can rely on those while writing the policies.
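For completeness, the file-system test usually behind such host/container distinctions can be sketched as follows. This is an assumption about common OpenVZ conventions, not necessarily how Rudder derives virt_host_vz/virt_guest_vz; the root prefix is parameterized purely so the logic can be exercised on a fake tree:

```shell
#!/bin/sh
# Classic OpenVZ heuristic: a host exposes both /proc/vz and /proc/bc,
# while a container sees /proc/vz only. $1 is a root prefix for testing.
openvz_role() {
  root="${1:-}"
  if [ -d "$root/proc/vz" ]; then
    if [ -d "$root/proc/bc" ]; then
      echo "host"
    else
      echo "container"
    fi
  else
    echo "none"
  fi
}
```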
As for your question about class availability for grouping - I would say all of them would be useful, but starting with the hard classes would be great too. However, the hard classes are mostly those the inventorying process has already figured out.
Hope this all will be useful, and if you have any more questions regarding the data I have submitted, please let me know.
Thanks a ton for your support!
Updated by Nicolas CHARLES over 10 years ago
Thank you a lot for these details, they are really useful to us.
It will take some time to process them correctly, though.
Many thanks
Updated by Alex Tkachenko over 10 years ago
That's all right, I understand. I hope something good will come out of it :)
Updated by Alex Tkachenko over 10 years ago
This issue is still assigned to me - would you like me to provide more data?
Please let me know.
Thanks.
Updated by Nicolas CHARLES over 10 years ago
- Assignee changed from Alex Tkachenko to Nicolas CHARLES
Oh, sorry for the assignment.
We currently don't need more data on this one, it's a pretty big chunk to process.
I'm assigning it to myself, to process all of these.
Thank you !
Updated by Nicolas CHARLES over 10 years ago
You may not have noticed, but we implemented this support in 2.11.
However, we have not yet implemented the RAID support.
Thank you !
Updated by Benoît PECCATTE over 9 years ago
- Category changed from 26 to Web - Nodes & inventories
Updated by Benoît PECCATTE over 8 years ago
- Tracker changed from Question to User story
- Target version set to Ideas (not version specific)
Updated by François ARMAND 7 months ago
- Status changed from Discussion to Resolved
- Regression set to No
closing as resolved (and soooo old - we will need a new eye on it if there's still missing parts)