Is there anything new on bioctl side?
Marco Peereboom: Unfortunately, no one worked any more on this and I got sidetracked working on IPMI and ACPI. We did get lots of good feedback from the community. I am hoping to work some more on this for this release.
apmd now supports automatic performance adjustment for CPUs which provide this feature. How does it choose how much and how often to change the CPU frequency?
Nikolay Sturm: The idea is very simple, apmd needs to figure out when the CPU is busy and then raise its frequency to get maximum performance. Once the CPU is idle again, the frequency can be lowered to save energy and reduce the noise level.
In practice however, this is more complicated. First, you need a good metric. Some people believe the system's load is that metric, but that's wrong. The load is just the average number of processes running plus the ones wanting to run. This does not tell us anything about the actual CPU usage.
The kernel actually measures where the CPU spends its time: regular userland processing, nice userland processing, kernel processing, and interrupt handling. Another counter measures the amount of time the CPU is idle, and this is the correct value to look at.
Unfortunately, this opens a new can of worms, as you are free to look at these counters as often as you want. The more often you look and possibly adjust the CPU frequency, the better your overall performance but the less stable your CPU frequency becomes.
One of the reasons that made me implement this feature was that other tools were adjusting the CPU frequency multiple times per second, so it was jumping around all the time.
Having a background in physics, I prefer stable systems, so I watched my computer doing stuff and tried to figure out a good-enough time scale for apmd's measurements. It turned out that looking once a second is enough to still get decent performance without changing the frequency too often, because CPU-intensive tasks normally last either much shorter or much longer than that.
Figuring out how much to change the frequency was simple, as we can only set the frequency to certain discrete values, the number of which varies from CPU to CPU. On average, however, we have four or five values. To get decent performance, going up in 50% steps and down in 20% steps was considered acceptable.
This way, a CPU-intensive task will take about two seconds to bring the CPU to full speed, but when it does a little I/O in between, reducing CPU usage, the frequency is not dropped too much, so it can happily work on.
Bob Beck: The automatic performance mode Nikolay describes above can be enabled in two ways.
apmd -A starts apmd in automatic performance adjustment mode. In this mode, the performance adjustment is done only when the system is running on battery power, or if the battery charge is low. Otherwise, the CPU is left to run at full speed.
apmd -C starts apmd in cool running performance mode, which will adjust the CPU speed regardless of the power connection status.
What is the status of the G5 port?
Mark Kettenis: The iMac G5 is perhaps the best supported model now; Xorg works out of the box, and even the built-in ambient light sensor works. The only things that don't work are FireWire and the built-in Airport Extreme. It's difficult to support the Airport Extreme because Broadcom refuses to give us documentation for it.
Support for the various PowerMac G5 models isn't complete. We haven't tested all models, but it seems the dual-core G5s don't work at all. On the dual-CPU models, the onboard SATA controller doesn't work yet, but you can always hook up a USB disk to those. The single-CPU models might just work.
The Xserve G5 also has the onboard SATA controller problem.
Although the G5 is a 64-bit CPU, OpenBSD runs in 32-bit mode. It runs the same kernel and userland as the older G3/G4 machines supported by macppc. But if the kernel detects a G5, you'll automatically get the W^X protection that the CPU provides.
Do you prefer this CPU over x86 ones? How many features of this architecture do you plan to support now that Apple dropped it?
Mark Kettenis: The x86 architecture is just plain ugly because of all the legacy code it needs to support, although the IBM 970FX/MP (the CPU Apple calls G5) has its dark corners too.
We have no plans for dropping support for the macppc platform. We still support the mac68k platform that runs on Apple machines with a Motorola 68000 CPU, the CPU used by Apple before they switched to PowerPC.
For now, Apple is still selling G5 machines. And when they stop doing that, PowerPC won't be dead. IBM's new Cell Broadband Engine is PowerPC-based and might end up in interesting and affordable machines in the near future.
Does OpenBSD 3.9 work on Intel-based Macs?
Mark Kettenis: No, but since I (and a few other developers) have one now, 4.0 probably will. ;-)
Part of the rthreads implementation has been included in the kernel, but it is not built by default. What type of development road map should we expect?
Ted Unangst: Signal handling and scheduler modifications are the next big steps. After that, whatever refinement is necessary to make the system robust.
Signal handling is rather difficult because signals are sent to the process, but the actual delivery is done to a thread. Each thread can set its own signal mask, and it's the kernel's job to find a recipient. Some signal actions, however, like suspend and continue, should be sent to all threads, since you don't want only one thread to stop.
GENERIC kernel now includes multicast routing. How does it work?
Esben Norby: Multicast routing has been around for a very long time, but not that many people use it. I think there are several reasons for the limited use of multicasting in general. Multicasting (the ability to send the same packets from one source to several receivers) lacks a killer application. That killer application could be video streaming, radio streaming, etc.
I've been using the multicast routing stuff with OpenBSD (for video streaming) for a couple of years now, but at this point it is time to clean up the whole multicast mess.
The only thing new about multicast routing in 3.9 is the ability to enable it via sysctl; no more recompiling kernels. :-)
Before multicast routing can be used, a multicast routing protocol must be present. Currently, base only supports DVMRP with the mrouted(8) daemon. Mrouted(8) is old, scary, and weird, and DVMRP is the only protocol it speaks; more modern multicast routing protocols exist, like PIM. DVMRP is not bad per se; I use it in a medium-sized network (about 50 routers).
Currently, work is ongoing to improve the multicast routing protocol situation. In multicast routing, the routing table is a bit different and the forwarding function acts differently. A multicast routing table entry contains not just one outgoing direction, but possibly several. For example, a packet arrives on interface 1 with a destination of 224.1.1.1 (an address in the special Class D range); the packet is forwarded out on interfaces 3, 4, and 7 because hosts on those interfaces (or other multicast routers on behalf of hosts) have expressed interest in that particular group.