The Future of Moore’s Law
Should we start getting used to a future of processor stagnation, with no expectation of speed improvements? Mike Bedford investigates
Perhaps it seems like a very long time ago that we first started hearing about the impending end of Moore’s Law; and in fact, a quick search reveals such predictions going back to 1997. We’ll see later just what Moore’s Law does and doesn’t say, but the sentiment of these predictions was that a time will come when the relentless improvements in processor performance that we’ve grown to expect will come to an end. And when that time comes, we’ll be faced with a stark realisation: this is as good as it gets.
Well, it seems you might have to start coming to terms with that outlook sooner rather than later because, according to some industry experts, Moore’s Law has already come to an end. We did hope to get Intel’s view on this question; after all, Gordon Moore, who gave his name to the law, was one of Intel’s founding fathers.
But while, historically, Intel has tended to wax lyrical about Moore’s Law, on this occasion, the company declined to answer our questions on the subject. We’ll leave you to come to your own conclusions about this. In the meantime, however, we’ll present our own analysis of the future of Moore’s Law and come to a view as to whether the long-standing merchants of doom are, for once, correct in predicting the end of an era.
What is Moore’s Law?
Moore’s Law has commonly been misquoted to say, for example, that it predicts a doubling in processor performance every couple of years. While this might not be too far from the mark, this prediction would be an implication of Moore’s Law, as opposed to the substance of the law. In fact, when Gordon Moore made his famous statement, the first microprocessor was still six years away, the semiconductor industry was making discrete logic devices, and Intel – the company that would go on to launch some of the first commercial memory chips – had yet to be founded.
A 1965 article by Gordon Moore, published in the industry magazine Electronics, suggested that “the complexity for minimum component costs has increased at a rate of roughly a factor of two per year”. He followed this by suggesting that “certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”
If we remove the industry jargon, Moore was saying that the number of transistors in mainstream chips had doubled every year and he predicted that trend would hold true for at least ten years. He later amended this prediction to a doubling every two years, and that ten-year prediction has been extended over and over.
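Moore’s arithmetic is simple compound doubling, and it’s easy to check. Here’s a minimal sketch – the 1965 baseline of roughly 64 components is our assumption, back-calculated from his 65,000 figure for 1975:

```python
def project(start_count, start_year, end_year, years_per_doubling=1):
    """Compound doubling: the count doubles once per `years_per_doubling`."""
    doublings = (end_year - start_year) / years_per_doubling
    return start_count * 2 ** doublings

# Yearly doubling from an assumed ~64 components in 1965:
print(project(64, 1965, 1975))  # 65536.0 -- Moore's "65,000" for 1975
```

Run the same sum from the 4004’s 2,300 transistors in 1971 with a two-year doubling period and you get tens of billions of transistors by today – which is precisely why the recent shortfall discussed below is so telling.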
Although they didn’t exist back in 1965, today it’s reasonable to apply Moore’s Law to microprocessors. The graph (starting p66) shows the number of transistors per device, from the first ever microprocessor, the Intel 4004, to the latest and greatest of a couple of years ago. We’ve concentrated on Intel chips, but that’s not unreasonable since we’re discussing an Intel prediction. Extending the graph to the current date requires a degree of speculation, because Intel no longer quotes transistor counts. The graph has a logarithmic vertical axis, which means that any exponential growth – that is, a doubling in a fixed period of time – appears as a straight line. The straight line shows the trend that would be expected if that doubling occurred every two years, as predicted by Moore’s Law.
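The straight-line behaviour on a logarithmic axis follows directly from the maths: taking the logarithm of an exponential gives a linear function of time. A quick sketch – the two-year doubling period is the article’s; the 4004’s 2,300-transistor starting point is just a convenient baseline:

```python
import math

# Transistor counts doubling every two years, sampled at two-year intervals.
counts = [2300 * 2 ** (year / 2) for year in range(0, 11, 2)]

# A logarithmic axis effectively plots log2(count); the consecutive
# differences are constant, i.e. the points lie on a straight line.
logs = [math.log2(c) for c in counts]
steps = [round(b - a, 6) for a, b in zip(logs, logs[1:])]
print(steps)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```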
You’ll notice that in recent years most new products have fallen below that line, even though some of the more recent chips shown are high core-count, top-end Xeon chips. You might also notice that chips fell behind from around 1992 but, in time, the long-term trend was re-established. Naively, we might assume that the current below-par performance is another short-term glitch, but there’s more to this than meets the eye.
While the graph of transistor counts doesn’t seem to suggest that Moore’s Law has hit the buffers quite yet, or at least not so badly that it can’t recover, another trend does appear to be in trouble. That trend is the continual decrease in a chip’s feature size.
If you take more than a passing interest in the chips that drive your PCs, you can’t fail to have noticed that feature sizes decrease every few years, even if you’re not entirely sure what that means. Indeed, if you do struggle to fully understand the term, you wouldn’t be alone, because this has become a matter of fierce debate in recent years. This being the case, let’s use a rather vague definition and say that the feature size is the size of individual features in a microprocessor – without getting embroiled in whether that’s the size of a single transistor or a component part of a transistor.
When the 4004 processor first hit the streets in 1971, the feature size was 10 microns – which, if we express it in the unit more commonly used today, is 10,000nm. At somewhere between a tenth and a third of the width of a human hair, this was surely considered quite an achievement almost 50 years ago, even if it appears positively huge by today’s standards.
Since then, feature sizes have tumbled, dropping to 1,000nm by 1985, and they have, more recently, reduced by a factor of around 1.4 to 1.5 per generation – which, because area scales with the square of the linear dimension, equates to a halving in terms of area. By and large, today’s chips have a 14nm feature size, and the technology used to achieve that is referred to as the 14nm process.
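The link between the linear shrink and the area halving – and the number of generations it implies between 1985’s 1,000nm and today’s 14nm – can be sketched as follows. The exact 1.414 shrink factor is our assumption; real process steps vary slightly around it:

```python
import math

shrink = 1.414                     # assumed per-generation linear shrink
area_factor = shrink ** 2          # area scales with the square: ~2x
generations = math.log(1000 / 14) / math.log(shrink)

print(round(area_factor, 2))       # ~2.0: each generation halves area
print(round(generations, 1))       # ~12 generations from 1,000nm to 14nm
```

Roughly a dozen generations – which tallies with the familiar roadmap of 1,000nm, 700nm, 500nm and so on, down to 14nm.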
While a reduction in feature size might have paralleled the increase in transistor counts over the decades, it might not be immediately obvious that there’s a link between the two trends. At one time, decreases in feature size fuelled increases in the clock frequency of a processor. So while the 4004 had a clock speed of 740kHz, the 80386 – launched in 1985, when the feature size hit 1,000nm – would eventually be clocked at 33MHz. By 2004, the 90nm process allowed the Pentium 4 to be clocked at 3.8GHz.
In the intervening 14 years, clock speeds have barely increased beyond this figure; indeed many of today’s best-sellers actually have lower clock speeds. Even esoteric chips produced by binning – selecting those devices that tests show are capable of exceeding their design speed – can manage just 4.4GHz, with the option to boost a single core to 5GHz, temperature and power consumption permitting. Had speeds continued their previous trend, today’s best processor would be clocked at 100GHz but, of course, escalating power consumption derailed this particular trend.
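That 100GHz figure can be sanity-checked by fitting a constant doubling period to the 1971–2004 data and projecting it forward – a rough model of our own, not the article’s calculation, and it lands in the same ballpark:

```python
import math

f0, f1 = 740e3, 3.8e9                  # 4004 (1971) and Pentium 4 (2004), Hz
years = 2004 - 1971

# Fit: how many years per doubling would produce this growth?
doubling_period = years / math.log2(f1 / f0)

# Project 14 more years of the same trend, to 2018.
f_2018 = f1 * 2 ** ((2018 - 2004) / doubling_period)
print(round(doubling_period, 2), "years per doubling")
print(round(f_2018 / 1e9), "GHz")      # on the order of 100GHz
```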
While clock speeds might have levelled out, feature sizes have continued to fall, for one very important reason: it’s the driver of Moore’s Law. An example will help shed some light on this. The Intel 4004 had 2,300 transistors and the chip measured 3 x 4mm. Had the feature size not reduced from 10,000nm, a high-end desktop chip such as the forthcoming Intel Core i9, with its estimated 7 billion transistors, would measure around 5m by 7m.
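That startling figure checks out if you scale the 4004’s die by the square root of the transistor ratio, since die area grows linearly with transistor count at a fixed density:

```python
import math

t_4004, width_mm, height_mm = 2300, 3.0, 4.0   # 4004 die, from the article
t_new = 7e9                                     # estimated Core i9 count

scale = math.sqrt(t_new / t_4004)               # linear scale factor
print(round(width_mm * scale / 1000, 1), "m x",
      round(height_mm * scale / 1000, 1), "m")  # roughly 5m x 7m
```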
Admittedly, it’s rather extreme to consider a current processor built using a 47-year-old process, even if it does lead to the bizarre picture of a chip that wouldn’t come close to fitting in a desktop PC, let alone a handheld device. However, miniaturisation is about