DICK POUNTAIN
11 years after declaring Moore’s law dead, Dick explains why he really, truly means it this time
It’s 11 years since I last wrote a column about the end of Moore’s law (see issue 180!), in which time the number of transistors on a chip must have grown at least 100-fold. As I said in that column, just as in the one four years before it, declaring the end of Moore’s law is a mug’s game. It’s a bit like predicting the Second Coming or the arrival of a Covid-19 vaccine.
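For scale, the doubling arithmetic behind that "100-fold" guess can be sketched in a few lines (a back-of-envelope Python sketch; the 18- and 24-month doubling periods are the two figures commonly attributed to Moore's law, not numbers from this column):

```python
# Back-of-envelope Moore's law arithmetic: fold increase in transistor
# count over a period, given an assumed doubling interval.
def growth(years, doubling_months):
    """Fold increase after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

print(f"24-month doubling over 11 years: {growth(11, 24):.0f}x")  # ~45x
print(f"18-month doubling over 11 years: {growth(11, 18):.0f}x")  # ~161x
```

Whether 11 years buys you 45-fold or 161-fold depends entirely on which doubling period you assume, which is part of why arguing about exactly when the law "ended" is such a mug's game.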
As far back as a 1997 article for Byte, I’d predicted that lithographic limits and quantum effects would flatten the curve below 100nm feature size, and I was only off by one order of magnitude. That counts as a win in this futile race. Intel’s latest fabrication plant, built to produce chips with a minimum 10nm feature size, was very late indeed and only started delivering chips in 2019, five years after the previous 14nm generation of chips.
And so, in the past few months a chorus of commentators has been declaring that this time it’s for real: high-performance computing pioneer Charles Leiserson of MIT has remarked that “Moore’s law was always about the rate of progress, and we’re no longer on that rate”. It’s not just those physical limits on feature size I was writing about, but economics too. The cost of building a new fab has been rising by 13% year on year, and is headed north of $16 billion, at precisely the time when Donald Trump, that great tech entrepreneur, is calling for US companies to bring chip fabrication back home as part of his trade war on the Far East. Only Intel, AMD and Nvidia can even contemplate a smaller feature size (and Nvidia’s not that sure).
Of course, reaching bottom in feature size doesn’t mean the end of all progress in computing power. One effect of Moore’s law is to encourage software bloat – why bother writing efficient code if next year’s chip will speed up today’s crappy code? This is a problem waiting to be tackled: most of today’s commercial software could probably be sped up enormously by a decent rewrite. But another problem is that rewriting code is almost as expensive as fab-building.
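How much a rewrite can matter is easy to demonstrate with a toy case (a hypothetical example of my own, not one from any vendor): the same function written carelessly and then rewritten with memoisation, so repeated subproblems are solved only once.

```python
# The same Fibonacci function, written badly and then rewritten.
# fib_naive recomputes the same subproblems exponentially many times;
# fib_cached remembers each answer, turning it into linear time.
import time
from functools import lru_cache

def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

for f in (fib_naive, fib_cached):
    t0 = time.perf_counter()
    result = f(30)
    print(f"{f.__name__}: {result} in {time.perf_counter() - t0:.4f}s")
```

Both versions give identical answers; only the rewritten one scales. Commercial codebases are vastly bigger, of course, which is exactly why the rewrite is almost as expensive as the fab.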
Parallelism looked like the solution for a long time, and it sort of was: even the cheapest mobile phones today use multicore processors, and AMD is selling 16-core desktop chips now. The thing is, the more cores you build into a chip, the more of the silicon real estate gets eaten up by interconnect and, what’s worse, the model of parallelism employed for x86 family processors isn’t automatically exploitable by old software without a rewrite.
There are also two very different groups of people who need the extra power of multiple cores: games vendors and AI developers. The former have the cash to rewrite their games for each generation of CPU. The latter need far more parallelism than these chips offer, and so are headed off along a path toward special-purpose processors. Such “intelligent processing units” can speed up the kind of massive matrix and convolution calculations performed during deep learning by several orders of magnitude – the problem is they can’t run Animal Crossing, Google Chrome or Microsoft Word. They’re not general-purpose processors. They are, however, potentially incredibly lucrative, since they will eventually end up in every mobile phone, Alexa-style interface or self-driving vehicle. Venture capital is queuing up to invest in them just as mainstream processors begin to look less like a hot tip. Neil Thompson, an economist at MIT’s AI centre, has just written a paper called The Decline of Computers as a General Purpose Technology, which gives you an idea of the drift.
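To see why those workloads suit special-purpose silicon so well, consider a minimal sketch (my own illustration, not code from any IPU vendor) of a 1-D convolution as deep-learning frameworks define it: every output position is an independent multiply-accumulate sum, so thousands of them can be computed at once on parallel hardware.

```python
# 1-D convolution (sliding dot product, as used in deep learning).
# Each output element depends only on its own window of the input,
# so all outputs can in principle be computed in parallel.
def conv1d(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # → [-2, -2, -2]
```

A general-purpose CPU grinds through those sums a few at a time; an IPU lays them out across thousands of tiny multiply-accumulate units, which is where the orders-of-magnitude speedup comes from.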
Moore’s law is underpinned by the scaling behaviour of CMOS fabrication technology, and this is what we’re approaching the end of. Professor Erica Fuchs of the Department of Engineering and Public Policy at Carnegie Mellon University worries that a successor technology with equally benign scaling properties, that could maintain Moore’s law for general-purpose chips, is as yet unknown and may take years of basic research and development to find with no guarantee of success.
Candidates might include carbon nanotubes, graphene transistors, spintronics or even the dreaded qubits, but none of these are obvious replacements for CMOS scaling. She calls for a huge boost in public research funding to replace all the venture capital that’s being diverted into special-purpose AI chips. Unfortunately, the colossal cost of the Covid-19 pandemic is likely to make that a very hard sell indeed, given that most politicians have little idea of what chips do at all, let alone the subtle distinctions between special- and general-purpose ones.
Moore’s law encourages software bloat – why bother writing efficient code if next year’s chip will speed up today’s crappy code?
A successor technology with equally benign scaling properties is as yet unknown and may take years of basic research to find