Long a “Holy Grail” for scientists and engineers, quantum computers capable of practical applications are steadily inching closer to reality.

Built on the mind-bending properties of quantum physics—how things behave at very, very small scales—quantum computers can, in principle, solve problems that would take classical computers centuries.

How it works

Traditional computers store and process information using fundamental units known as bits (see overview). A single bit exists in only two states, either on (1) or off (0). Every complex task your computer or smartphone performs, from word processing to streaming videos, is built on the storage and rapid processing of trillions of bits in different configurations.

The fundamental units of information in quantum computers are qubits. Like bits, they have distinct states, but they behave very differently from classical bits. Two principles of quantum mechanics help illustrate why.

The first is superposition. At very small scales, particles and matter are described by probability waves. As a result, a qubit can exist in a combination of its two states, both 0 and 1 at once, whereas a classical bit is always either 0 or 1.
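A superposition can be modeled as a pair of numbers, called amplitudes, attached to the outcomes 0 and 1. This is a minimal sketch in plain Python (no quantum hardware or libraries; the amplitude values are illustrative assumptions), showing that measurement yields a random outcome whose probability is the amplitude squared:

```python
import random

# A single qubit in an equal superposition of 0 and 1, written as two
# (here real-valued) amplitudes. Each amplitude squared is the chance
# of seeing that outcome when the qubit is measured.
amp0 = 2 ** -0.5
amp1 = 2 ** -0.5

def measure():
    """Simulate a measurement: collapse to 0 or 1 at random."""
    return 0 if random.random() < amp0 ** 2 else 1

counts = [measure() for _ in range(10_000)]
print(counts.count(0) / len(counts))  # close to 0.5
```

Each individual measurement gives only one bit, which is why superposition alone is not a free speedup; the point is that the amplitudes, not the outcomes, are what a quantum computer manipulates.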

The second is entanglement. Not only do tiny particles exist in multiple states at once, but in certain configurations, their probability waves become linked (or entangled). Now, not only is each particle simultaneously in multiple states, but the whole group exists in multiple connected states that can be manipulated all at once.
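Entanglement can be sketched the same way, with one amplitude per joint configuration of the group. This toy example (illustrative, not a real simulator) uses a two-qubit "Bell state," where the amplitudes link the qubits so that measurements are perfectly correlated:

```python
import random

# A two-qubit Bell state: four amplitudes, indexed by the bit strings
# 00, 01, 10, 11. Only 00 and 11 carry any amplitude, so the two qubits
# always agree, even though each one alone looks like a fair coin flip.
amps = {"00": 2 ** -0.5, "01": 0.0, "10": 0.0, "11": 2 ** -0.5}

def measure_pair():
    """Sample a joint outcome with probability amplitude squared."""
    r, total = random.random(), 0.0
    for bits, a in amps.items():
        total += a * a
        if r < total:
            return bits
    return "11"

samples = [measure_pair() for _ in range(1000)]
print(set(samples))  # only '00' and '11' ever appear
```

Note that describing just two entangled qubits already takes four amplitudes; n qubits take 2**n, which is the state space a quantum computer manipulates all at once.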

Together, these properties allow quantum computers to carry out certain computations orders of magnitude faster than traditional computers. Roughly speaking, the state space doubles with each added qubit, so capability grows exponentially: using just 70 qubits, a Google prototype recently solved a benchmark problem estimated to take a classical supercomputer 47 years.

Check out a deeper technical dive into the mechanics of quantum computers here.

Different approaches

Scientists have used a variety of techniques to create working quantum computers. Each attempts to balance stability, coherence, practicality, and the ability to manufacture at scale.   

The most common approach relies on superconducting qubits. These are microscopic circuits that behave like artificial atoms. They’re relatively easy to manufacture but must be operated close to absolute zero (almost minus 460 degrees Fahrenheit). Both Google and IBM have demonstrated such devices. 

Other approaches use light particles (photons), particles suspended in electric fields (trapped ions), quantum dots, spin-based qubits, exotic states of matter known as Majorana particles, and more. See the various advantages of each here.

A key requirement for all of these approaches is maintaining quantum coherence. Most existing qubits lose their connected states, or decohere, within a millisecond or less.

Current state and beyond

A number of established tech companies and newer start-ups have demonstrated operational quantum computers. As of early 2024, two groups (IBM and Atom Computing) had broken the 1,000-qubit milestone.

However, unlike with classical computers, the sheer number of qubits matters less than maintaining coherence and lowering error rates. Researchers have developed a metric known as quantum volume (see definition), which attempts to capture overall performance and allows comparison between different computing approaches.

So far, quantum computers have been used to solve representative theoretical problems, and firms like Microsoft offer cloud tools that let organizations prepare for fault-tolerant quantum computing.

https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/ 

Video: Quantum computers aren’t the next generation of supercomputers — they’re something else entirely. Before we can even begin to talk about their potential applications, we need to understand the fundamental physics that drives the theory of quantum computing.

Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.

Quantized

A regular column in which top researchers explore the process of discovery. This month’s columnist, Scott Aaronson, is a professor of computer science at the University of Texas at Austin, specializing in quantum computing and computational complexity theory.

That would be fine if a business or technology reporter could truthfully tell readers, “Look, there’s all this deep quantum stuff under the hood, but all you need to understand is the bottom line: Physicists are on the verge of building faster computers that will revolutionize everything.”

The trouble is that quantum computers will not revolutionize everything.

Yes, they might someday solve a few specific problems in minutes that (we think) would take longer than the age of the universe on classical computers. But there are many other important problems for which most experts think quantum computers will help only modestly, if at all. Also, while Google and others recently made credible claims that they had achieved contrived quantum speedups, this was only for specific, esoteric benchmarks (ones that I helped develop). A quantum computer that’s big and reliable enough to outperform classical computers at practical applications like breaking cryptographic codes and simulating chemistry is likely still a long way off.

But how could a programmable computer be faster for only some problems? Do we know which ones? And what does a “big and reliable” quantum computer even mean in this context? To answer these questions we have to get into the deep stuff.

Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel.

This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do.

The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.

What superposition really means is “complex linear combination.” Here, we mean “complex” not in the sense of “complicated” but in the sense of a real plus an imaginary number, while “linear combination” means we add together different multiples of states. So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it’s 0, and a different amplitude attached to the possibility that it’s 1. These amplitudes are closely related to probabilities, in that the further some outcome’s amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.
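The amplitude-to-probability rule above can be checked in a few lines. This sketch uses illustrative amplitude values (0.6 and 0.8i are assumptions chosen so the probabilities sum to 1):

```python
# Two complex amplitudes for a single qubit. The probability of each
# outcome is its amplitude's squared distance from zero, abs(a) ** 2,
# and a valid state's probabilities must sum to 1.
amps = [0.6 + 0.0j, 0.0 + 0.8j]

probs = [abs(a) ** 2 for a in amps]
print(probs)  # roughly [0.36, 0.64]
```

The second amplitude is purely imaginary, yet its probability is an ordinary positive number; the complex phase only matters when amplitudes are combined, which is where interference enters.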

But amplitudes are not probabilities. They follow different rules. For example, if some contributions to an amplitude are positive and others are negative, then the contributions can interfere destructively and cancel each other out, so that the amplitude is zero and the corresponding outcome is never observed; likewise, they can interfere constructively and increase the likelihood of a given outcome. The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you’ll see the right answer with a large probability when you look. The tricky part is to do this without knowing the answer in advance, and faster than you could do it with a classical computer.
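The cancellation described above can be seen in the smallest possible example: applying a Hadamard gate twice. This is a minimal hand-rolled sketch (the two-line function stands in for a real gate implementation); the second application sends the two paths to outcome 1 through amplitudes +1/2 and -1/2, which destructively interfere:

```python
from math import sqrt

def hadamard(a0, a1):
    """Hadamard gate on one qubit: each output amplitude is a sum of
    contributions from both inputs, one of them with a minus sign."""
    s = 1 / sqrt(2)
    return s * (a0 + a1), s * (a0 - a1)

state = (1.0, 0.0)         # start in the state 0
state = hadamard(*state)   # equal superposition: (0.707..., 0.707...)
state = hadamard(*state)   # the +1/2 and -1/2 paths to outcome 1 cancel
print(state)               # back to (1.0, 0.0), up to rounding
```

If amplitudes were ordinary probabilities, two applications of a randomizing gate would leave the bit random; it is the minus sign, impossible for probabilities, that steers the state deterministically back to 0.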

Twenty-seven years ago, Shor showed how to do all this for the problem of factoring integers, which breaks the widely used cryptographic codes underlying much of online commerce. We now know how to do it for some other problems, too, but only by exploiting the special mathematical structures in those problems. It’s not just a matter of trying all possible answers at once.
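The "special mathematical structure" Shor exploited is periodicity: factoring N reduces to finding the period r of a**x mod N, and the quantum part of his algorithm finds that period exponentially faster. A classical sketch of the reduction, with the period found by brute force (the values N=15, a=7 are illustrative):

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force.
    This search is the step a quantum computer speeds up."""
    x, val = 1, a % N
    while val != 1:
        x, val = x + 1, (val * a) % N
    return x

N, a = 15, 7
r = find_period(a, N)               # 7**4 = 2401 = 1 (mod 15), so r = 4
factor = gcd(a ** (r // 2) - 1, N)  # gcd(48, 15) = 3, a factor of 15
print(r, factor)
```

The brute-force loop takes time proportional to r, which for cryptographic-size N is astronomically large; Shor's quantum period-finding is what makes the reduction dangerous to real encryption.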
