To get the full benefit from multiple cores, developers need to use parallel programming techniques. Parallel programming remains a difficult discipline to master, however, and it has seen little use outside of specialized scientific programs such as climate simulators.
Perhaps a better way to deal with multiple cores is to rethink the way operating systems handle these processors, Probert said. "Really, the question is, not how do we do parallel, but what do we do with all these transistors?"
The current architecture of operating systems is based on a number of different abstractions, he explained.
In the early days of computing, one program was run on a single CPU. When we wanted multiple programs to run on a single processor, the CPU's time was sliced up among processes, giving each application the illusion that it was running on a dedicated CPU.
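The time-slicing idea can be illustrated with a toy round-robin scheduler (a simplified sketch, not anything from Probert's talk): each "process" is just a list of work units, and the single CPU runs a fixed number of units per turn before moving to the next process in the queue.

```python
from collections import deque

def run_round_robin(tasks, slice_ticks):
    """Toy time-slicing: one CPU, many processes.

    tasks: dict mapping a process name to its list of work units.
    slice_ticks: how many units a process may run before yielding.
    Returns the timeline of (process, work unit) pairs the CPU executed.
    """
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, work = queue.popleft()
        # Run up to one slice's worth of this process's remaining work.
        for _ in range(min(slice_ticks, len(work))):
            timeline.append((name, work.pop(0)))
        if work:
            # Unfinished: go to the back of the queue and wait for another turn.
            queue.append((name, work))
    return timeline
```

Interleaving two processes this way, e.g. `run_round_robin({"A": [1, 2, 3], "B": [4, 5]}, 2)`, shows each one making steady progress as if it had the CPU to itself.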
The idea of the process was an abstraction, and it wouldn't be the last. Once the OS started juggling multiple programs, it needed a protected space of its own, free from interference by users and programs. Thus was born kernel mode, separate from the space in which programs run: user mode. In effect, kernel mode and user mode abstracted the CPU into two CPUs, Probert said.
With all these virtual CPUs, however, come struggles over who gets the attention of the real CPU. The overhead of switching between all these CPUs starts to grow to the point where responsiveness suffers, especially when multiple cores are introduced.
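A back-of-the-envelope model (my own illustration, with invented parameter names) shows why this overhead bites: every time slice costs one context switch, so shorter slices waste a larger fraction of the CPU, and the more contexts that are contending, the longer each one waits for its next turn.

```python
def overhead_fraction(slice_len, switch_cost):
    """Fraction of CPU time lost to switching, assuming one switch per slice."""
    return switch_cost / (slice_len + switch_cost)

def worst_case_wait(n_contexts, slice_len, switch_cost):
    """Ticks a context may wait before its next turn under round-robin:
    every other context runs a full slice, plus a switch, first."""
    return (n_contexts - 1) * (slice_len + switch_cost)
```

With a switch cost of 1 tick and slices of 9 ticks, 10% of the CPU goes to switching; shrink the slices to keep many contexts responsive and the lost fraction climbs, while adding contexts stretches the wait between turns linearly.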
But with Intel and AMD predicting that the core count of their products will continue to multiply, the OS community may be safe in jettisoning abstractions such as user mode and kernel mode, Probert argued.
"With many-core, CPUs [could] become CPUs again," he said. "If we get enough of them, maybe we can start to hand them out" to individual programs.
In this approach, the operating system would no longer resemble the kernel mode of today's OSes, but rather act more like a hypervisor. A concept from virtualization, a hypervisor acts as a layer between the virtual machine and the actual hardware.
The programs, or runtimes as Probert called them, would themselves take on many of the duties of resource management. The OS could assign an application a CPU and some memory, and the program itself, using metadata generated by the compiler, would know best how to use these resources.
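The hand-out model described above might be sketched as follows (a hypothetical illustration; the class and method names are invented, not from Probert's design): the OS keeps a pool of cores and memory, grants whole cores and memory regions to runtimes on request, and otherwise stays out of the way until the runtime releases them.

```python
class ToyHypervisorOS:
    """Sketch of an OS acting like a hypervisor: it hands out cores and
    memory to runtimes rather than time-slicing a shared CPU."""

    def __init__(self, n_cores, memory_mb):
        self.free_cores = set(range(n_cores))
        self.free_mem = memory_mb
        self.grants = {}  # runtime name -> (cores, memory granted)

    def grant(self, runtime, n_cores, mem_mb):
        """Give a runtime exclusive cores and memory, or None if unavailable.
        What happens *inside* the grant is the runtime's business."""
        if len(self.free_cores) < n_cores or self.free_mem < mem_mb:
            return None  # runtime must wait or ask for less
        cores = {self.free_cores.pop() for _ in range(n_cores)}
        self.free_mem -= mem_mb
        self.grants[runtime] = (cores, mem_mb)
        return cores, mem_mb

    def release(self, runtime):
        """Runtime is done: return its cores and memory to the pool."""
        cores, mem_mb = self.grants.pop(runtime)
        self.free_cores |= cores
        self.free_mem += mem_mb
```

The design point the sketch makes concrete: the OS's job shrinks to allocation and isolation, while scheduling within a granted partition moves into the runtime, guided by whatever the compiler knows about the program.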
Probert admitted that this approach would be very hard to test out, as it would require a large pool of existing applications. But the work could prove worthwhile.
"There is a lot more flexibility in this model," he said.