Intel's latest earnings call was a real stunner: it confirmed a rumor I'd covered here, and at the same time seemed to spell the end of Moore's Law, which has guided the company for decades.
On the call, the company confirmed that it will build a third generation of processors on the 14nm process, adding an extra year before the next process shrink. The move to 10nm would not come until the second half of 2017.
This marks the end of the "tick/tock" strategy, where "ticks" were process shrinks, and "tocks" were whole new architectures. Each tick and tock was about one year apart. Intel introduced this method in 2007 because it found trying to do new architectures and die shrinks at the same time had become too challenging and introduced too much risk.
Most recently, it released the "Haswell" generation of chips as its latest "tock," built on a 22nm process design. A year later came the "tick," Broadwell, which was shrunk to just 14nm. But Intel struggled with 14nm and Broadwell came late, fouling up the launch of 14nm "Skylake," a whole new architecture to replace Haswell.
Skylake comes this year, and next year was supposed to be "Cannonlake," a 10nm shrink of Skylake with relatively minor changes. But Intel has struggled with 10nm as well and has announced there will be a second "tock" with the release of "Kaby Lake."
As rumored by the Chinese hobbyist site benchlife.info, Kaby Lake will be a 14nm part, the third after Broadwell and Skylake. It will be available in a range of designs, covering everything from ultra-low-power parts for laptops and tablets up to full desktop chips.
A lot of people in the tech press are wringing their hands over this signaling the demise of Moore's Law: the rule of thumb that Intel co-founder and former CEO Gordon Moore coined that says transistor density will double every 18 to 24 months.
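Stated as a formula, the rule of thumb is simple exponential growth. A minimal sketch, using illustrative figures rather than actual Intel data:

```python
# Moore's Law as a rule of thumb: transistor density doubles roughly
# every 18 to 24 months. Projecting forward from a starting point.
# (The 1-billion starting count below is a hypothetical, not Intel data.)

def projected_transistors(start_count, years, doubling_period_years=2.0):
    """Project transistor count after `years`, doubling once per period."""
    return start_count * 2 ** (years / doubling_period_years)

# From a hypothetical 1 billion transistors, ten years out:
print(projected_transistors(1e9, 10))       # doubling every 24 months -> 32x
print(projected_transistors(1e9, 10, 1.5))  # doubling every 18 months -> ~100x
```

The gap between the 18-month and 24-month readings of the law is itself large: over a decade it is the difference between a 32x and a roughly 100x increase.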
Moore was an engineer and I have no doubt his intentions were pure, but since his time, Moore's Law has come to be the most successful marketing gimmick and implementation of planned obsolescence in history. Hey, your computer is lagging behind. We've got the latest and greatest with twice as many transistors. Time to buy a new one.
The fact is, shoehorning more transistors into a CPU isn't accomplishing much anymore, and really hasn't for some time. The performance jump from Sandy Bridge to Haswell was somewhere between 10% and 15%, depending on the application. From Haswell to Skylake, the gain is even more modest: just 6.7%.
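To see why single-digit generational gains are so underwhelming, it helps to compound them. A quick sketch, treating the 6.7% figure as a hypothetical per-generation improvement:

```python
# Compounding small per-generation performance gains.
# At 6.7% per generation, three generations buy you only about 21% total,
# versus the rough doubling per generation of the old days.

def compound_gain(per_gen_pct, generations):
    """Total speedup factor after compounding a per-generation gain."""
    return (1 + per_gen_pct / 100) ** generations

print(compound_gain(6.7, 3))   # ~1.21x after three generations
print(compound_gain(100, 3))   # 8x if each generation had doubled
```

In other words, a buyer skipping three generations of 6.7% gains sees barely a fifth more performance, which goes a long way toward explaining why people hang onto their PCs.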
I'm waiting, and somewhat hoping, for more and better benchmarks, because my understanding of Skylake is that all of the internals are new. With faster PCIe and USB, internal transfers should be much faster, especially with PCIe SSDs.
But let's assume the Hexus numbers are in line with what we will get. Two years of work by some of the smartest electrical engineers in the world for a 6.7% performance increase? Good luck selling that to anyone with a PC less than three years old. As it is, PC sales have slumped in part because PCs are now so quick and responsive – especially thanks to SSDs – that people are content with what they have and are hanging onto their PCs longer.
A decade ago, Intel hit the performance wall when it couldn't crank up the clock speed of single-core CPUs beyond a certain point due to heat. So it found performance improvements by adding multiple cores. It squeezed out more with Turbo Boost, which uses the thermal headroom left by idle cores to crank up the clock on the cores actually in use, improving single-threaded performance.
For a decade, it worked out well. Now it seems multi-core is out of running room as well. Intel doesn't need to worry about putting more transistors in a CPU; it needs to somehow get back to the days when a new CPU architecture meant a 50% to 100% jump in performance without requiring liquid nitrogen to cool it, and I don't see that happening.
There is plenty of opportunity in areas outside the CPU. SATA III is a real drag on the PC and desperately needs a boost. Intel could widen or speed up the CPU bus, which has been largely unchanged for years. It could push to kill USB 2.0 and get everyone onto 3.1, which would be a huge performance increase, but as Microsoft has learned, entrenched technologies go away about as easily as dandelions.
Either way, I'd be more worried about increasing performance than doubling transistors. Moore's Law was great planned obsolescence for a long time. Now it's holding Intel to an unreasonable set of expectations that it should not be obsessing over.