I recently took delivery of a new laptop. It has a super-fast dual-core processor. I know this for sure because so much of the marketing around the machine was focused on how wonderful it is to have two processor cores whizzing along at all sorts of giddy gigahertz heights.
Now if I had been handed the laptop in some sort of Pepsi challenge, I would have no easy way to detect the two cores. I can honestly say that my day-to-day experience with the machine gives me no sense of the bottomless computing power the marketing collateral gushed about.
The marketing isn't wrong, of course. It just isn't the whole story. The end-user impact of extra processors can only be judged when all the other factors are looked at too: RAM, disk contention, and so on.
This is not a new problem of course. Finding ways to benchmark databases, transaction processors, CPUs, compilers etc. is an old chestnut.
Back in the early days of the PC revolution we had the Whetstone benchmark. It spawned all sorts of derivatives, including Dhrystone. More recently we have TPC-C for OLTP benchmarking. Python has Pystones and Linux has BogoMips.
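To make the idea concrete, here is a minimal sketch of what these synthetic "stone" benchmarks do: time a fixed, artificial workload and report how many iterations per second the machine can sustain. The workload below is invented for illustration and is not any real stone; it just shows why such a number says little about real application performance.

```python
import time

def toy_stone(loops=1_000_000):
    """A toy 'stone' in the spirit of Dhrystone/Pystone: time a fixed
    synthetic workload of integer arithmetic and occasional string
    conversion, then report loops per second. Illustrative only."""
    start = time.perf_counter()
    x = 0
    s = ""
    for i in range(loops):
        x = (x + i) % 99991      # cheap integer arithmetic
        if i % 1000 == 0:
            s = str(x)           # occasional string work
    elapsed = time.perf_counter() - start
    return loops / elapsed       # "toy-stones" per second

if __name__ == "__main__":
    print(f"{toy_stone():,.0f} toy-stones/second")
```

Note that this single-threaded loop would report roughly the same score on one core or two, which is exactly the gap between synthetic stones and the lived experience of the machine.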
And yet, and yet... my problem is that none of these help me assess the true performance of my wee laptop. Again, back in the early days of the PC, there was a community-created, ad-hoc benchmark I remember using: Microsoft Flight Simulator. If your PC could run it well - the argument went - you could be sure of reasonable graphics, reasonable number crunching, and reasonable compatibility with the original IBM PC.
Today I have less time for cool games and more time for wrangling large word-processor files, munching 20k-line spreadsheets, and running hefty IDEs. I would love to be able to point to benchmarks based on, say, OpenOffice performance or Eclipse performance and use these as real-world indications of the true user experience of all that CPU goodness.
Unfortunately, the concept of application-level performance benchmarks - "stones" - seems to have fallen by the wayside.
It is a concept worth revisiting I think.