Business World

Uncertainty becomes new normal as era of Moore’s law draws to a close

- By Howard Yu. Howard Yu is professor of strategy and innovation at IMD Business School, with campuses in Switzerland and Singapore. This article was originally published in the South China Morning Post.

WHEN a product’s performance is improved beyond a singular dimension, as historically dictated by Moore’s law, roles and responsibilities blur.

Last month, the world’s biggest chipmaker, Intel, whose brand is synonymous with personal computers and laptops, announced that its former chief executive Paul Otellini had passed away in his sleep at the age of 66. As the company’s fifth chief executive, Otellini presided over its period of greatest growth, lifting annual revenue from $34 billion to $53 billion in 2012. In fact, more money was made under his eight-year reign than in the previous 37 years of Intel’s existence. No other company could turn out a better or faster microprocessor, the engine that springs into motion when you turn your computer on.

In 1965, Intel cofounder Gordon Moore made a bold prediction about the exponential growth of computing power. From the vacuum tube to the discrete transistor to the integrated circuit, the miniaturization of computer hardware had progressed apace. Extrapolating the trend, Moore asserted that the number of microchip transistors etched into a fixed area of a computer microprocessor would double every two years. Since transistor density was correlated with computing power, the latter would also double every two years. As improbable as it might have seemed, Intel has since delivered on this promise, immortalizing “Moore’s law.”

It’s difficult for anyone to fathom the effects of exponential growth. Take an imaginary letter-sized piece of paper and fold it in half. Then fold it a second time and then a third. The thickness of the stack doubles every time. If you managed to fold the same piece of paper 42 times, the stack would be so thick that it would stretch all the way to the moon. That’s exponential growth. Exponential growth explains why a single iPhone today possesses more computing power than the entire spacecraft that flew the Apollo moon mission back in 1969. Without Moore’s law, there would be no Google, no Facebook, no Uber, no Airbnb. Silicon Valley would just be like any other valley.
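The arithmetic is easy to check. A short Python sketch (assuming, purely for illustration, a sheet roughly 0.1 mm thick and an average Earth-to-moon distance of about 384,400 km) shows the stack overtaking the moon on the 42nd fold:

    # Check the paper-folding claim: thickness doubles with every fold.
    # Assumed figures (illustrative only): a sheet about 0.1 mm thick,
    # and an average Earth-to-moon distance of about 384,400 km.
    sheet_thickness_m = 0.0001        # 0.1 mm expressed in metres
    moon_distance_m = 384_400_000     # 384,400 km expressed in metres

    thickness = sheet_thickness_m
    for fold in range(1, 43):
        thickness *= 2                # each fold doubles the stack
        if thickness >= moon_distance_m:
            print(f"After {fold} folds the stack is about {thickness / 1000:,.0f} km thick")
            break

Forty-one folds still fall short, at roughly 220,000 km; the 42nd doubling carries the stack past the moon.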

When I was at a conference in Israel, a former Intel executive told me that Gordon Moore could get “rather philosophical” about the future of Moore’s law. When asked by his staff when this amazing trajectory might end, the cofounder responded, “Nothing grows exponentially forever.” And indeed, Intel was no exception.

In 2016, Intel disclosed in a regulatory filing that it was slowing the pace of launching new chips. Its latest transistor is down to only about 100 atoms wide. The fewer atoms composing a transistor, the harder it is to manipulate. Following this trend, by early 2020, transistors should be just 10 atoms wide. At that scale, electronic properties would be disturbed by quantum physics, making any device hopelessly unreliable. Samsung, Intel, and Microsoft have already shelled out $37 billion just to keep the magic going, but soon enough, engineers and scientists will hit the fundamental limit of physics.

The imminent demise of Moore’s law, however, doesn’t mean a total pause in new product hype. It doesn’t mean that virtual reality headsets, the Internet of Things, and artificial intelligence are all smoke screens. It won’t stop machines from taking away more white-collar jobs — although it’s a nice thought. What it does mean is that technological drivers will shift away from mere computing horsepower and focus elsewhere, such as on more clever software design. Despite 50 years of staggering gains in computing brawn, software development has taken a back seat. Charles Simonyi, a computer scientist who oversaw the development of Microsoft Word and Excel, said in 2013 that software had failed to leverage the advances that had occurred in hardware. The temptation to rely on hardware’s brute force to mask inelegant software design had been too strong. A prime example is in the area of artificial intelligence.

Until very recently, computers required programmers to write instructions. Computers generally don’t learn autonomously; they follow rules. However, Google has demonstrated that machines can learn on their own, becoming better and smarter without human supervision. When its program AlphaGo trounced Chinese Go grandmaster Ke Jie in May, Ke took note of his opponent’s unique and sometimes transcendent style of play: “AlphaGo is improving too fast. Last year, it was still quite humanlike when it played. This year, it became like a god of Go.”

The all-powerful AlphaGo was made possible by “deep learning,” a software design approach that mimics the working of neurons in the human brain. Google’s software engineers figured out how to “reward” the program in the form of higher scores when the algorithm achieves a desired outcome. AlphaGo then writes its own instructions randomly, generating many candidates on a trial-and-error basis and replacing lower-scoring strategies with higher-scoring ones. That’s how an algorithm teaches itself to become better, without constant human supervision.
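As a rough illustration of that trial-and-error loop (this is not AlphaGo’s actual algorithm, which pairs deep neural networks with game-tree search; it is a minimal random-search sketch in which a hypothetical score() function stands in for the reward):

    import random

    def score(strategy):
        # Hypothetical reward: higher when the strategy's three parameters
        # sit closer to an ideal setting the learner doesn't know in advance.
        # A real system would score strategies by playing games and counting wins.
        ideal = [0.3, 0.7, 0.5]
        return -sum((s - i) ** 2 for s, i in zip(strategy, ideal))

    best = [random.random() for _ in range(3)]   # start from a random strategy
    best_score = score(best)

    for _ in range(10_000):
        # Propose a random variation of the current best strategy.
        candidate = [b + random.gauss(0, 0.1) for b in best]
        candidate_score = score(candidate)
        # Keep whichever strategy scores higher; discard the other.
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score

    print("Best strategy found:", [round(b, 3) for b in best])

The same keep-what-scores-higher loop, scaled up enormously with neural networks and self-play, is what allows a program to keep improving without constant human supervision.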

When a product’s performance is improved beyond a singular dimension, as historically dictated by Moore’s law, roles and responsibilities blur. Software firms are enticed to dabble in hardware, and hardware makers, in turn, create niche products. Facebook and Amazon are already designing their own data centers, Microsoft has started making its own chips, and Intel is now jumping into virtual reality technologies. Unlike in the innocent era of desktop computers, we will no longer have the dominant architecture of Windows and Intel. Gone will be the existing industry order. And so, in the age of cloud computing, artificial intelligence, and the Internet of Things, choices and competition will proliferate.

For non-IT companies, purchasing will become more complicated. Managers will no longer be able to look for the industry’s best practice and buy off-the-shelf solutions outright. More investigation and negotiation will become commonplace. The passing of Paul Otellini will always remind us of a simple, innocent world that we’ll dearly miss.

PHOTO: Google’s AlphaGo, the first computer program to defeat a professional human Go player, was made possible because of “deep learning,” a software design approach that mimics the working of neurons in the human brain.
