Quantum computing
What is it, and what can it really do?
Google says it has accomplished ‘quantum supremacy’. This is a fancy way of saying its Sycamore processor can do something no conventional computer can match. It worked out a complex maths problem in three minutes and 20 seconds. The search-engine giant says a state-of-the-art supercomputer would struggle to do it in under 10,000 years. This is because Sycamore isn’t just an upgrade on existing technology… it’s a completely different way of working.
Sycamore is a quantum computer, meaning it’s supercharged by the strange behaviour of particles. This advanced processing power could one day help cure diseases like dementia or transform artificial intelligence, so it’s no surprise other tech firms are hard at work developing their own versions. Governments, too, are investing billions in their own research. In fact, the rivalry between the United States and China has been called the ‘21st-century Space Race’.
While Sycamore is a giant leap for Google, it’s only the first step for this tech revolution. The underlying physics that make quantum computing so extraordinary also cause some of its biggest challenges. And we haven’t solved many of them yet.
To understand how quantum computers work, you need to wrap your head around a mind-boggling fact: objects can be in two places at once. This is very hard to grasp, partly because it’s not how we perceive things, and partly because for centuries scientists from Isaac Newton onwards have told us the world follows predictable patterns. For instance, an apple always falls down to the ground, even if it bonks you on the head first. And if you take that apple home and put it in your kitchen, you’re not suddenly going to find it in your bathroom.
But these rules don’t apply at the subatomic level. This is what ‘quantum’ means: the smallest amount – or quantity – we can measure, the building blocks of the universe. In the early 20th century, scientists like Niels Bohr, Werner Heisenberg and Erwin Schrödinger found that although a particle could turn up almost anywhere, you can never be certain of finding it in any particular place – you can only calculate the probability. This is because particles can be in two places at once. For instance, electrons spin both up and down simultaneously.
Physicists call this behaviour ‘superposition’. To complicate matters, superposition only happens when we’re not looking. The moment we try to measure it, the particles lose their uncertain state and spin only up or down. The best physicists can do is calculate the probability of which state a particle will appear in when observed.
As if this wasn’t weird enough, particles can also be ‘entangled’ in pairs or groups. They become deeply linked to one another, so you can’t change one without the other changing as well. Albert Einstein called this “spooky action at a distance” because it works even if the particles are at opposite ends of the universe.
If you’re struggling with these ideas, you’re in good company. Richard Feynman shared the 1965 Nobel Prize in Physics for helping to define how quantum physics works, but even he said: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” This didn’t stop Feynman from proposing the idea of building a quantum computer though.
Incredibly, it was back in 1981 that Feynman told a lecture hall at the California Institute of Technology it was time to reinvent the computer. That’s the same year IBM coined the phrase ‘personal computer’, or ‘PC’ for short. And it would still be another decade before these devices became everyday items.
But all computers – from those early IBMs to your modern-day MacBook – work by processing ‘bits’ of information. Each bit represents a value of one or zero. This binary code forms the basis of all the calculations a computer can process. And the more bits a computer has, the more complex the tasks it can handle.
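The idea that everything a computer does boils down to patterns of ones and zeros can be sketched in a few lines of Python (an illustrative snippet written for this explanation, not anything from a real machine’s internals):

```python
def to_bits(value, width):
    """Return `value` as a list of 0/1 ints, most significant bit first."""
    return [(value >> i) & 1 for i in reversed(range(width))]

# Four classical bits can represent 16 different values,
# but only ever hold one pattern at a time.
print(to_bits(13, 4))  # -> [1, 1, 0, 1]
```

A bigger register can represent bigger numbers, but the rule never changes: one pattern at a time.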
A quantum computer, Feynman proposed, would use a quantum bit, or ‘qubit’. These would exist in superposition, so they can hold both one and zero at the same time. Entangle two qubits and they can hold four values at once: 1-0, 0-1, 1-1 and 0-0. Each extra qubit doubles the number of values the machine can hold, so as the number of qubits grows, a quantum computer very quickly becomes more powerful than a conventional one and can process information in a fraction of the time.
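To see why the power grows so fast, here is a toy Python sketch of the bookkeeping an ordinary computer would need to keep track of qubits in superposition (the function and its name are illustrative assumptions for this article, not real quantum-programming code):

```python
import math

def equal_superposition(n_qubits):
    """Amplitudes of n qubits spread equally over all 2**n basis states
    (for two qubits: 00, 01, 10 and 11)."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)  # squared amplitudes must add up to 1
    return [amp] * dim

print(len(equal_superposition(2)))  # -> 4 values tracked at once
# Sycamore's 53 qubits would need 2**53 amplitudes (about 9 quadrillion) --
# one reason simulating it on a classical machine is so hard.
print(2 ** 53)
```

Each extra qubit doubles the length of that list, which is the exponential growth the article describes.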
While Feynman provided a blueprint for how the technology could work, actually building a quantum computer proved much harder. Qubits are made from individual atoms or subatomic particles, and merely trying to control them risks making them lose their quantum properties. Simply linking them together took years of work, with the first two-qubit computer appearing in 1998.
This all changed over 20 years ago, when superconducting circuits were pioneered in Japan. This involves cooling qubits to -273 degrees Celsius using powerful fridges. Using this method, Intel has achieved 49 qubits, and IBM boasts 53. Google’s game-changing Sycamore processor also has 53 qubits, but the tech giant’s already built another with 72. A start-up with US$119.5 million in funding called Rigetti even says it’s working on a 128-qubit system.
But it’s harder to cool large objects than smaller ones, especially when you need them to be colder than the depths of space. So just as superconducting qubits have reached the scale at which they can achieve quantum supremacy, they may be about to outgrow the refrigerators they rely on.
One alternative is to use ions – atoms that have gained or lost an electron, giving them an electrical charge. These can be trapped using microchips that emit electric fields, and each trapped ion can then be used as a qubit. Crucially, these work at room temperature, solving the fridge problem. But trapped ions have only been tested in labs, and building them on the industrial scale that’s needed will take a long time. Microsoft, meanwhile, is experimenting with topological qubits, which would be less sensitive to temperature, but this involves splitting electrons. You could say this technology is in a quantum state of its own: it’s both making important breakthroughs and at the same time we’re only just beginning to understand it.