PC Pro

DICK POUNTAIN

11 years after declaring Moore’s law dead, Dick explains why he really, truly means it this time

- dick@dickpountain.co.uk


It’s 11 years since I last wrote a column about the end of Moore’s law (see issue 180!), in which time the number of transistors on a chip must have grown at least 100-fold. As I said in that column, just as in one four years before it, declaring the end of Moore’s law is a mug’s game. It’s a bit like predicting the Second Coming or the arrival of a Covid-19 vaccine.
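As a back-of-envelope check of that “at least 100-fold” figure, here is a minimal sketch assuming the commonly quoted doubling periods of 18 and 24 months (the exact period is an assumption, not something the column specifies):

```python
# Rough transistor-count growth over 11 years under Moore's law,
# for two commonly quoted doubling periods (18 and 24 months).
years = 11
months = years * 12

growth_18mo = 2 ** (months / 18)  # doubling every 18 months
growth_24mo = 2 ** (months / 24)  # doubling every 24 months

print(f"18-month doubling: ~{growth_18mo:.0f}x")  # ~161x
print(f"24-month doubling: ~{growth_24mo:.0f}x")  # ~45x
```

The 18-month version comfortably clears 100-fold; the more conservative 24-month version does not, which is a reminder of how sensitive these claims are to the assumed doubling period.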

As far back as a 1997 article for Byte, I’d predicted that lithographic limits and quantum effects would flatten the curve below 100nm feature size, and I was only off by one order of magnitude. That counts as a win in this futile race. Intel’s latest fabrication plant, built to produce chips with minimum 10nm feature size, was very late indeed and only started delivering chips in 2019, five years after the previous 14nm generation of chips.

And so, in the past few months a chorus of commentators have been declaring that this time it’s for real: high-performance computing pioneer Charles Leiserson of MIT has remarked that “Moore’s law was always about the rate of progress, and we’re no longer on that rate”. It’s not just those physical limits on feature size I was writing about, but economics too. The cost of building a new fab has been rising by 13% year on year, and is headed north of $16 billion, at precisely the time when Donald Trump, that great tech entrepreneur, is calling for US companies to bring chip fabrication back home as part of his trade war on the Far East. Only Intel, AMD and Nvidia can even contemplate a lower level of feature size (and Nvidia’s not that sure).

Of course, reaching bottom in feature size doesn’t mean the end of all progress in computing power. One effect of Moore’s law is to encourage software bloat – why bother writing efficient code if next year’s chip will speed up today’s crappy code? This is a problem waiting to be tackled: most of today’s commercial software could probably be sped up enormously by a decent rewrite. But another problem is that rewriting code is almost as expensive as fab-building.

Parallelism looked like the solution for a long time, and it sort of was: even the cheapest mobile phones today use multicore processors, and AMD is selling 16-core desktop chips now. The thing is, the more cores you build into a chip, the more of the silicon real estate gets eaten up by interconnect and, what’s worse, the model of parallelism employed for x86 family processors isn’t automatically exploitable by old software without a rewrite.

There are also two very different groups of people who need the extra power of multiple cores: games vendors and AI developers. The former have the cash to rewrite their games for each generation of CPU. The latter need far more parallelism than these chips offer, and so are headed off along a path toward special-purpose processors. Such “intelligent processing units” can speed up the kind of massive matrix and convolution calculations performed during deep learning by several orders of magnitude – the problem is they can’t run Animal Crossing, Google Chrome or Microsoft Word. They’re not general-purpose processors. They are, however, potentially incredibly lucrative, since they will eventually end up in every mobile phone, Alexa-style interface or self-driving vehicle. Venture capital is queuing up to invest in them just as mainstream processors begin to look less like a hot tip. Neil Thompson, an economist at MIT’s AI centre, has just written a paper called The Decline of Computers as a General Purpose Technology, which gives you an idea of the drift.

Moore’s law is underpinned by the scaling behaviour of CMOS fabrication technology, and this is what we’re approaching the end of. Professor Erica Fuchs of the Department of Engineering and Public Policy at Carnegie Mellon University worries that a successor technology with equally benign scaling properties, one that could maintain Moore’s law for general-purpose chips, is as yet unknown and may take years of basic research and development to find, with no guarantee of success.

Candidates might include carbon nanotubes, graphene transistors, spintronics or even the dreaded qubits, but none of these are obvious replacements for CMOS scaling. She calls for a huge boost in public research funding to replace all the venture capital that’s being diverted into special-purpose AI chips. Unfortunately, the colossal cost of the Covid-19 pandemic is likely to make that a very hard sell indeed, given that most politicians have little idea of what chips do at all, let alone the subtle distinctions between special- and general-purpose ones.

Moore’s law encourages software bloat – why bother writing efficient code if next year’s chip will speed up today’s crappy code?

A successor technology with equally benign scaling properties is as yet unknown and may take years of basic research to find

Dick Pountain is editorial fellow of PC Pro. He plans to leave it for another decade before he writes another Moore’s law column.
