In simple words, quantum computing is the use of quantum mechanics (also known as quantum physics), the science that describes things at the atomic and subatomic levels, to dramatically increase computer processing power and speed.
Why Is Quantum Computing Important?
For certain kinds of problems, what would take a regular computer millions of years to accomplish could take a quantum computer just seconds. Quantum computing's core benefit, then, is the dramatic acceleration of computing tasks that are out of reach for classical machines. The applications of that speedup are wide-ranging, from much stronger cybersecurity to significantly enhanced customer experiences to anything else that requires a lot of computing power to quickly produce a certain result or answer.
Quantum computing also has incredible importance for the potential of machine learning and artificial intelligence. Since quantum computers can work through enormous numbers of scenarios at incredible speed, they have the potential to help learning systems optimize themselves far more effectively for whatever central mission or task they've been assigned.
How Do Quantum Computers Work?
Where classical computers process information with bits, built from transistors, that are always either 1 or 0, quantum computers use qubits, which can exist in a superposition of 1 and 0 at the same time. Adding more bits increases a classical machine's capacity only linearly, but every qubit added doubles the number of states a quantum register can represent, so its capacity grows exponentially. That's the power of a qubit, the basic unit of quantum information and critical to how quantum computers work.
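For readers who like to see the numbers, here is a tiny Python sketch of that exponential bookkeeping (the qubit counts chosen are purely illustrative): describing an n-qubit register takes 2^n amplitudes, so every added qubit doubles what a classical simulator would have to track.

```python
# Toy illustration: an n-qubit register is described by 2**n complex
# amplitudes, so the state space doubles with every qubit added.
for n in (1, 2, 4, 8, 16, 24):
    amplitudes = 2 ** n
    print(f"{n:>2} qubits -> {amplitudes:,} amplitudes to track")
```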
Perhaps the best way to think about the value of quantum computing and how it works, in plain English, is to think of a coin. Every coin has two sides, or values: heads and tails. When a coin is flipped, though, it spends some time in the air spinning between both values. A regular computer can read only heads or tails, so it can't do anything with the information the coin carries while it's spinning. A quantum computer, by contrast, can treat this spinning state as a value in itself, one in which the coin is, in a sense, both heads and tails at the same time.
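The coin analogy maps directly onto a single qubit. Here is a minimal sketch, assuming nothing beyond NumPy: the "spinning coin" is an equal superposition of 0 (heads) and 1 (tails), and measuring it forces a definite outcome, with probabilities given by the squared amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "spinning coin": an equal superposition of 0 (heads) and 1 (tails).
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement collapses the superposition; each outcome's probability is
# the squared magnitude of its amplitude, here 50/50.
probs = np.abs(qubit) ** 2
flips = rng.choice([0, 1], size=10, p=probs)
print("measured outcomes:", flips)
```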
This has powerful implications. Think of a four-digit PIN that uses only ones and zeros, for example. To determine this PIN, a regular computer, since it can read only ones and zeros, has to test the possibilities for each of the four slots one combination at a time, eliminating options until it finally arrives at the correct one. A quantum computer, since it can overlap ones and zeros in the same register, can in effect explore many possibilities at once and home in on the correct PIN in far fewer steps.
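To make that intuition concrete without overselling it: a quantum computer doesn't simply try every PIN at once and read off the answer. Instead, an algorithm like Grover's search amplifies the correct possibility over roughly the square root of N steps, versus the up-to-N guesses a classical brute force needs. The sketch below simulates that amplification on an ordinary computer for a 4-slot PIN of ones and zeros (the specific PIN value and the NumPy-based toy model are illustrative assumptions, not production code).

```python
import numpy as np

n = 4                    # a 4-slot PIN of ones and zeros -> N = 2**n options
N = 2 ** n
correct_pin = 0b1011     # hypothetical answer the search must find

# Start in an equal superposition: every candidate PIN is equally likely.
state = np.full(N, 1.0 / np.sqrt(N))

# Grover's algorithm needs only about (pi / 4) * sqrt(N) iterations.
iterations = int(np.pi / 4 * np.sqrt(N))
for _ in range(iterations):
    state[correct_pin] *= -1          # oracle: mark the correct PIN
    state = 2 * state.mean() - state  # diffusion: reflect about the mean

prob = state[correct_pin] ** 2
print(f"{iterations} iterations, P(correct PIN) = {prob:.3f}")  # ~0.96
```

With 16 candidates, just 3 iterations push the probability of measuring the correct PIN above 95 percent, whereas a classical search would need up to 16 guesses.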
Limitations and Challenges of Quantum Computing
In the nearly 40 years since physicist Richard Feynman first proposed the idea of quantum computing, computer scientists have made enormous progress in figuring out which problems quantum computing would be good for. However, there’s still a long way to go until quantum computing is understood and developed enough to actually be applied to the above-mentioned use cases of cybersecurity and machine learning.
Also, even for simpler things like playing chess, scheduling airline flights, and proving theorems, quantum computers—in their current state at least—would suffer from many of the same algorithmic limitations as classical computers.
These limitations are in addition to the practical difficulties of actually building quantum computers, such as decoherence (unwanted interaction between a quantum computer and its environment, which introduces errors).
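Decoherence can be pictured with the coin analogy, too. In the toy NumPy sketch below (the dephasing rate is an arbitrary illustrative number), the off-diagonal entries of a qubit's density matrix carry its "quantumness"; interaction with the environment steadily shrinks them until the spinning coin behaves like an ordinary random coin flip.

```python
import numpy as np

# Density matrix of the "spinning coin" (equal superposition). The
# off-diagonal entries hold the quantum coherence.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])

gamma = 0.3  # hypothetical dephasing strength per time step
for step in range(1, 6):
    rho[0, 1] *= 1 - gamma  # the environment steadily erases coherence
    rho[1, 0] *= 1 - gamma
    print(f"step {step}: coherence = {rho[0, 1]:.3f}")

# Once the off-diagonals reach zero, the qubit is effectively a classical
# coin flip, and the quantum computation's advantage is lost.
```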
That said, quantum computing is undoubtedly a field of the future within computer science—a capability that many of the world’s top computer scientists are diligently developing so that our world can benefit from a huge leap in computer processing power. It’s no longer a matter of “why” or “what,” but “when.”