I think watchdogd is another good tool for a server, but I'm wondering--are watchdog timers part of common motherboards?
Alexander Yurchenko: Most chipset vendors include a watchdog timer in their integrated circuits, although not every motherboard manufacturer actually wires it up. For example, the popular Intel 6300ESB chipset has a watchdog timer, and OpenBSD 3.8 includes a driver for it--ichwdt(4)--but you should refer to your motherboard manual to see whether you can benefit from it.
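On OpenBSD, a supported watchdog timer is driven through the kernel's watchdog(4) framework rather than a separate daemon; as a minimal sketch (assuming a chipset with a driver such as ichwdt(4), and the sysctls documented in watchdog(4)), arming it might look like:

```shell
# Arm the hardware watchdog with a 30-second timeout.  If the kernel
# ever locks up hard and stops retriggering the timer, the chipset
# resets the machine.
sysctl kern.watchdog.period=30

# Let the kernel retrigger the timer automatically while it is healthy.
sysctl kern.watchdog.auto=1
```

Setting the period back to 0 disarms the timer again.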
One tool to manage them all: that would be bioctl, a RAID management interface. This is the first version of the tool, so what type of features does it provide already?
Marco Peereboom: It provides the bare necessities to do RAID management without rebooting. As a matter of fact, Theo wrote a very informative email to misc@ in which he explains the functionality.
The idea is to use the BIOS of the RAID card to create and delete RAID volumes and to set some values like rebuild rate, alarm enable/disable, etc. Then bioctl(8) is used to monitor the RAID HBA while running inside the OS. When a RAID volume fails, we can replace the bad disk with a new one or rely on the RAID card to start rebuilding on a hot spare. After a failed disk is replaced, bioctl(8) provides the mechanics to turn the unused disk into a hot spare. This, plus a few more options (e.g., enable/disable alarm) and commands (e.g., blink disk, create hot spare), essentially provides a full-fledged RAID management solution. Vendor solutions have a lot more options, but after evaluating those and really thinking about this, we came to the conclusion that they are too complex and riddled with useless functionality. Don't get me wrong; we have plenty of work ahead of us. However, from a "What do we really use?" perspective, I think we are pretty darn close.
My experience has shown the following usage pattern:
- Create RAID volumes in BIOS
- Install OS
- Install software to do something
- Monitor OS & hardware
When upgrading becomes a necessity, people usually do this:
- Practice upgrade on secondary server
- Schedule downtime
- Execute upgrade
- Test and redeploy server
- Resume "Monitor OS & hardware"
Most likely failure scenario:
- Disk goes bad
- Hot spare kicks in and rebuilds from parity, essentially replacing the bad disk
- Operator physically replaces bad disk
- Operator makes newly inserted disk a hot spare
- Resume "Monitor OS & hardware"
bioctl(8) provides all necessary functionality to perform these steps.
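As a sketch, the failure scenario above might map onto bioctl(8) invocations like the following (the controller name ami0 and the channel:target numbers here are hypothetical; see bioctl(8) for the exact options on your system):

```shell
# Check the state of the volumes and disks behind the ami0 controller.
bioctl -v ami0

# Blink the LED on the failed disk (channel 0, target 2) so the
# operator can find it in the enclosure.
bioctl -b 0:2 ami0

# After swapping in a fresh disk, turn the unused disk into a hot spare
# so the controller can rebuild onto it next time.
bioctl -H 0:2 ami0

# Silence the controller's alarm once the rebuild is underway.
bioctl -a silence ami0
```

Everything else--creating and deleting volumes, rebuild rates--stays in the card's BIOS, as described above.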
Most of the monitoring magic is handled by the RAID firmware. So what we really had to do was come up with a "common language" that works on all RAID controllers. The API we came up with is simple and should translate to just about any RAID card I have played with. Undoubtedly we'll run into some issues, but we'll deal with those when they appear. What makes this an interesting exercise is that the more RAID cards we support, the easier this'll become.
David Gwynne: This is minimal compared to the management tools provided by vendors, which offer everything from creating and destroying RAID sets all the way up to flashing the firmware on your controller. Since all of this functionality is present in the controller's BIOS when you boot the machine, we considered most of it to be fluff once you're actually in the operating system running a production server. If you're going to modify the RAID sets and change those parameters, your machine is going to be out of service in a maintenance window anyway, and rebooting isn't a problem. The real problem when you're running a machine is telling when your RAID sets are degraded and fixing them. With that in mind, we made a conscious decision to support the bare minimum that is common across all controllers.
Has any vendor chosen to contribute hardware or specifications?
Marco Peereboom: LSI has been very nice in providing hardware, certain pieces of documentation, and engineering help. In the end, to make all this happen, there was quite a bit of reverse engineering done as well.
OpenBSD has not received any documentation or help from other vendors. This is really sad if you think about it. It is apparent that a lot of OpenBSD users want RAID and RAID management; not many days go by without someone on the lists asking about what products to buy and how to manage them. The answers have been pretty boring since everyone is pointed to a single vendor. The vendors that have not cooperated with OpenBSD are clearly losing business.
I honestly do not understand why vendors are so secretive about all this. All these products are essentially the same. Here is what a common command looks like:
- Send some command to firmware
- Wait for completion (either polled or via callback)
- Parse results
If you look at the code involved in getting this done all the way from userland into the RAID firmware, you'll realize that it is really trivial. Someone explain to me why setting a handful of values in a structure is considered IP.
David Gwynne: Personally I would appreciate documentation from a vendor more than hardware. Our community seems to be enthusiastic enough about what we're doing that they'll chip in for hardware when it is needed.
What is in the pipeline for future releases?
David Gwynne: At the moment I am going over the ami driver, trying to clean it up and optimize it a bit. There are changes coming in the bio and sensor frameworks that will make things better; to most people, however, these changes will be boring and transparent.
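The sensor framework mentioned here is visible from userland too; as a hypothetical check on an OpenBSD box, the hw.sensors sysctl tree shows whatever the drivers export, and sensorsd(8) can act on it:

```shell
# List everything the kernel's sensor framework currently exports,
# such as temperatures, voltages, and drive status from drivers
# that attach sensors.
sysctl hw.sensors

# sensorsd(8) can then be enabled to log or alert on state changes.
```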
Marco Peereboom: On my own list I have adding bio(4) support and streamlining ami(4). In the not-too-distant future, SAS (Serial Attached SCSI) products will start to arrive in the field. I want to add support for those products as well.
In other words, I am not going to be bored. As a matter of fact, there is so much work in this area that we really could use some help. If there are any folks out there with the skills and time to work on this, please let me know.