If someone asked you to picture a quantum computer, what would you see in your mind?
Maybe you see a normal computer, just bigger, with some mysterious physics magic going on inside? Forget laptops or desktops. Forget computer server farms. A quantum computer is fundamentally different in both the way it looks and, more importantly, in the way it processes information.
There are currently several ways to build a quantum computer. But let’s start by describing one of the leading designs to help explain how it works.
Imagine a lightbulb filament hanging upside down, but the most intricate one you’ve ever seen. Instead of one slender twist of wire, it has organized silvery swarms of them, neatly braided around a core and arranged in layers that narrow as you move down. Golden plates separate the structure into sections.
The outer part of this vessel is called the chandelier. It’s a supercharged refrigerator that uses a special liquified helium mix to cool the computer’s quantum chip down to near absolute zero. That’s the coldest temperature theoretically possible.
At such low temperatures, the tiny superconducting circuits in the chip take on their quantum properties. And it’s those properties, as we’ll soon see, that could be harnessed to perform computational tasks that would be practically impossible on a classical computer.
Traditional computer processors work in binary—the billions of transistors that handle information on your laptop or smartphone are either on (1) or they’re off (0). Using a series of circuits, called “gates,” computers perform logical operations based on the state of those switches.
Classical computers are designed to follow specific inflexible rules. This makes them extremely reliable, but it also makes them ill-suited for solving certain kinds of problems—in particular, problems where you’re trying to find a needle in a haystack.
This is where quantum computers shine.
If you think of a computer solving a problem as a mouse running through a maze, a classical computer finds its way through by trying every path until it reaches the end.
What if, instead of solving the maze through trial and error, you could consider all possible routes simultaneously?
Quantum computers do this by substituting the binary “bits” of classical computing with something called “qubits.” Qubits operate according to the mysterious laws of quantum mechanics: the theory that physics works differently at the atomic and subatomic scale.
The classic way to demonstrate quantum mechanics is by shining a light through a barrier with two slits. Some light goes through the top slit, some the bottom, and the light waves knock into each other to create an interference pattern.
But now dim the light until you’re firing individual photons one by one, the elementary particles that make up light. Logically, each photon has to travel through a single slit, and it has nothing to interfere with. But somehow, you still end up with an interference pattern.
Here’s what happens according to quantum mechanics: Until you detect them on the screen, each photon exists in a state called “superposition.” It’s as though it’s traveling all possible paths at once. That is, until the superposition state “collapses” under observation to reveal a single point on the screen.
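The interference at the heart of this experiment comes from adding the amplitudes of the two paths before squaring, not from adding probabilities. A minimal sketch of that arithmetic (the two-path model and the 1/2 normalization are illustrative simplifications, not a full optics calculation):

```python
import cmath

def detection_probability(phase1, phase2):
    """Probability of detecting a photon when two path amplitudes combine.

    Each path contributes an amplitude of magnitude 1/2 (an illustrative
    normalization); amplitudes add first, then the modulus squared gives
    the probability. Probabilities alone could never cancel each other out.
    """
    amplitude = (cmath.exp(1j * phase1) + cmath.exp(1j * phase2)) / 2
    return abs(amplitude) ** 2

print(detection_probability(0.0, 0.0))       # paths in phase: bright fringe (1.0)
print(detection_probability(0.0, cmath.pi))  # paths out of phase: dark fringe (~0.0)
```

Because the amplitudes can cancel as well as reinforce, some spots on the screen receive no photons at all, which is exactly the fringe pattern the experiment shows.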
Qubits harness this ability to make certain calculations dramatically more efficient.
For the maze example, the superposition state would contain all the possible routes. And then you’d have to collapse the state of superposition to reveal the likeliest path to the cheese.
Just like you add more transistors to extend the capabilities of your classical computer, you add more qubits to create a more powerful quantum computer.
Thanks to a quantum mechanical property called “entanglement,” scientists can link multiple qubits into a single shared state, even when the qubits aren’t in physical contact with each other. And while an individual qubit exists in a superposition of two states, that capacity grows exponentially as you entangle more qubits: a two-qubit system can represent four possible states at once, and a 20-qubit system more than a million.
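That exponential growth is easy to make concrete: fully describing the state of n entangled qubits takes 2^n numbers. A quick illustration:

```python
def state_size(n_qubits):
    """Number of complex amplitudes needed to describe n entangled qubits.

    This exponential growth is why classically simulating even a few dozen
    qubits overwhelms the world's largest supercomputers.
    """
    return 2 ** n_qubits

for n in (1, 2, 20, 50):
    print(f"{n:>2} qubits -> {state_size(n):,} amplitudes")
```

At 50 qubits the count passes a quadrillion, which is roughly where classical simulation of a general quantum state becomes impractical.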
So what does that mean for computing power? It helps to think about applying quantum computing to a real-world problem: factoring large numbers into primes.
A prime number is a natural number greater than 1 that can only be divided evenly by itself and 1.
While it’s easy to multiply small numbers into giant ones, it’s much harder to go the reverse direction; you can’t just look at a number and tell its factors. This is the basis for one of the most popular forms of data encryption, called RSA.
You can only break RSA encryption by factoring a number that is the product of two large primes. Each prime factor is typically hundreds of digits long, and together they serve as a unique key to a problem that’s effectively unsolvable without knowing the answers in advance.
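To see why factoring is the hard direction, here is a toy sketch using naive trial division. The primes below are small illustrative examples; real RSA primes run to hundreds of digits, putting this kind of search astronomically out of reach:

```python
import math

def trial_factor(n):
    """Naive trial division: fine for small n, hopeless at RSA scale.

    The search grows with sqrt(n), so every few digits added to n
    multiplies the work, while multiplying p and q stays instant.
    """
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime (or 1)

# Small illustrative primes; real RSA primes are hundreds of digits long.
p, q = 104_729, 1_299_709
n = p * q               # multiplying: instant
print(trial_factor(n))  # factoring: already ~100,000 divisions for 12 digits
```

The asymmetry between those two directions, easy to multiply, hard to factor, is what the security of RSA rests on.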
In 1994, the mathematician Peter Shor, then at AT&T Bell Laboratories and now at M.I.T., devised a novel algorithm for factoring large numbers into primes, whatever their size. One day, a quantum computer could use its computational power, and Shor’s algorithm, to hack everything from your bank records to your personal files.
In 2001, IBM made a quantum computer with seven qubits to demonstrate Shor’s algorithm. For qubits, they used atomic nuclei, which have two different spin states that can be controlled through radio frequency pulses.
This wasn’t a great way to make a quantum computer, because it’s very hard to scale up. But it did manage to run Shor’s algorithm and factor 15 into 3 and 5. Hardly an impressive calculation, but still a major achievement in simply proving the algorithm works in practice.
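Shor’s insight was that factoring reduces to finding the period of a^x mod N, and only that period-finding step needs a quantum computer. Here is a sketch of the classical scaffolding around it, with the period found by brute force, which is only feasible for a tiny case like N = 15 (the choice of a = 7 is one of the standard textbook examples):

```python
from math import gcd

def shor_classical_part(N, a):
    """The classical scaffolding of Shor's algorithm (illustration only).

    The quantum computer's sole job is finding the period r of a**x mod N;
    here we brute-force it, which only works for tiny N like 15.
    """
    # Find the smallest r > 0 with a**r % N == 1.
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    if r % 2 == 1:
        return None  # odd period: retry with a different a
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_classical_part(15, 7))  # -> (3, 5)
```

For N = 15 and a = 7, the powers of 7 mod 15 cycle 7, 4, 13, 1, so the period is 4, and the two gcd computations recover the factors 3 and 5, exactly what IBM’s machine demonstrated.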
Even now, experts are still trying to get quantum computers to work well enough to best classical supercomputers.
That remains extremely challenging, mostly because quantum states are fragile. It’s hard to completely stop qubits from interacting with their outside environment, even with precise lasers in supercooled or vacuum chambers.
Any noise in the system leads to a state called “decoherence,” where superposition breaks down and the computer loses information.
A small amount of error is natural in quantum computing, because we’re dealing in probabilities rather than the strict rules of binary. But decoherence often introduces so much noise that it obscures the result.
When one qubit goes into a state of decoherence, the entanglement that enables the entire system breaks down.
So how do you fix this? The answer is called error correction, and it can happen in a few ways.
Error Correction #1: A fully error-corrected quantum computer could handle common errors like “bit flips,” where a qubit suddenly changes to the wrong state.
To do this you would need to build a quantum computer with a few so-called “logical” qubits that actually do the math, and a bunch of standard qubits that correct for errors.
It would take a lot of error-correcting qubits, maybe 100 or so per logical qubit, to make the system work. But the end result would be an extremely reliable and generally useful quantum computer.
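The idea of spending many physical qubits to protect one logical qubit has a classical ancestor: the repetition code, where majority voting corrects occasional bit flips. This is only a loose analogy, since real quantum codes can’t simply copy a qubit and instead measure error “syndromes” on entangled helpers, but it shows the core trade of redundancy for reliability:

```python
import random
from collections import Counter

def encode(bit, copies=5):
    """Repetition code: the classical ancestor of quantum bit-flip correction."""
    return [bit] * copies

def noisy_channel(bits, flip_prob=0.1):
    # Each bit independently flips with probability flip_prob.
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit if fewer than half flipped.
    return Counter(bits).most_common(1)[0][0]

random.seed(0)  # deterministic demo
received = noisy_channel(encode(1))
print(decode(received))
```

With five copies, the logical bit survives as long as no more than two of them flip, which is far more likely than a single unprotected bit surviving. Quantum error correction pays a much steeper overhead for the same kind of guarantee.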
Error Correction #2: Other experts are trying to find clever ways to see through the noise generated by different errors. They are trying to build what they call “noisy intermediate-scale quantum” (NISQ) computers using another set of algorithms.
That may work in some cases, but probably not across the board.
Error Correction #3: Another tactic is to find a new qubit source that isn’t as susceptible to noise, such as “topological particles” that are better at retaining information. But some of these exotic particles (or quasi-particles) are purely hypothetical, so this technology could be years or decades off.
Because of these difficulties, quantum computing has advanced slowly, though there have been some significant achievements.
In 2019, Google used a 54-qubit quantum computer named “Sycamore” to do an incredibly complex (if useless) simulation in under 4 minutes—running a quantum random number generator a million times to sample the likelihood of different results.
Sycamore works very differently from the quantum computer that IBM built to demonstrate Shor’s algorithm. Sycamore takes superconducting circuits and cools them to such low temperatures that the electrical current starts to behave like a quantum mechanical system. At present, this is one of the leading methods for building a quantum computer, alongside trapping ions in electric fields, where different energy levels similarly represent different qubit states.
Sycamore was a major breakthrough, though many engineers disagree about exactly how major. Google said it was the first demonstration of so-called quantum advantage: achieving a task that would have been practically impossible for a classical computer.
It said the world’s best supercomputer would have needed 10,000 years to do the same task. IBM has disputed that claim.
At least for now, serious quantum computers are a ways off. But with billions of dollars of investment from governments and the world’s biggest companies, the race for quantum computing capabilities is well underway. The real question is: How will quantum computing change what a “computer” actually means to us? How will it change how our electronically connected world works? And when?