QC — The strength and the weakness of Qubits in Quantum Computing
The concept of quantum parallelism is powerful and not hard to understand. We will explain it in this article. But, somewhat disappointingly, creating a quantum algorithm that beats its classical counterpart is hard. That is why we don’t have a large zoo of quantum algorithms yet. Here, we will spend some time discussing the challenges and look into some solutions. For those who are not familiar with quantum operators, please refer to Part 3 of our quantum computing series first.
The power of quantum parallelism can be summarized in the following matrix. We apply a unitary operation to manipulate qubit(s) in one step (in practice, polynomial steps).
Since the number of components in the computational basis increases exponentially with the number of qubits, quantum parallelism offers the possibility of computing f(x) for all valid values of x simultaneously.
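The idea above can be sketched with a tiny classical simulation. This is only an illustration, not an implementation of a real quantum device: we represent the state as a dictionary of basis-state amplitudes, and one (simulated) application of the unitary U_f: |x⟩|0⟩ → |x⟩|f(x)⟩ attaches f(x) to every basis state at once. The function f(x) = x² mod 8 is an arbitrary choice for the demo.

```python
# Classical sketch of quantum parallelism (illustration only).
# The register starts in an equal superposition of all x; one application
# of the simulated unitary U_f attaches f(x) to every basis state.
def f(x):
    """Arbitrary illustrative function."""
    return (x * x) % 8

n = 3
amp = 2 ** (-n / 2)                                  # uniform amplitude 1/sqrt(8)
state = {(x, 0): amp for x in range(2 ** n)}         # sum_x |x>|0>
after = {(x, f(x)): a for (x, _), a in state.items()}  # apply U_f once

for (x, fx), a in sorted(after.items()):
    print(f"|{x:03b}>|{fx:03b}>  amplitude {a:.4f}")
```

Classically we would need a loop with 2ⁿ evaluations of f; the quantum state after U_f carries all 2ⁿ values in a single application. The catch, as the rest of the article explains, is that measurement only ever reveals one of them.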
Prepare & consolidate superposition
Unfortunately, a few major issues limit the types of problems we can take advantage of. Success depends on:
- Can we prepare the superposition state in polynomial time?
- Can we consolidate the superposition so we can also measure it in polynomial time?
- Can we find the unitary operators to do that?
Let’s illustrate the concept further. We will focus on the big picture and therefore won’t worry if some details are missing. A quantum algorithm can be composed of three stages before making a measurement.
First, we prepare a superposition so we can take advantage of quantum parallelism. Once the state is in a high-dimensional space, we have a lot of elbow room for manipulation. Then, we consolidate the superposition before the measurement. Let’s get into more details.
The most common and effective way to prepare a superposition is the Hadamard gate. For example, we apply n Hadamard gates in parallel to prepare n qubits in one single step:
The example below has n = 3, and the superposition becomes an equal mix of all eight basis states, (|000⟩ + |001⟩ + … + |111⟩)/√8:
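We can check this with a minimal stdlib-only sketch: build H⊗H⊗H as an explicit 8×8 matrix, apply it to |000⟩, and confirm every basis state ends up with amplitude 1/√8.

```python
import itertools

# The single-qubit Hadamard gate.
H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -(2 ** -0.5)]]

def kron(a, b):
    """Kronecker (tensor) product of two matrices as nested lists."""
    return [[x * y for x in row_a for y in row_b]
            for row_a in a for row_b in b]

def hadamard_n(n):
    """H tensored with itself n times."""
    m = H
    for _ in range(n - 1):
        m = kron(m, H)
    return m

n = 3
state = [1.0] + [0.0] * (2 ** n - 1)   # |000>
Hn = hadamard_n(n)
out = [sum(Hn[i][j] * state[j] for j in range(2 ** n)) for i in range(2 ** n)]
for bits, amp in zip(itertools.product("01", repeat=n), out):
    print("".join(bits), round(amp, 4))   # every amplitude is 1/sqrt(8) ~ 0.3536
```

Note that building the 2ⁿ×2ⁿ matrix explicitly is exactly what a classical simulator must pay for; the quantum computer applies the three gates in a single parallel step.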
Next, we need to refine the superposition such that its coefficients are loaded with the numbers we need to solve the problem. To demonstrate quantum supremacy, we want to do it in polynomial time for this exponential number of coefficients.
This is the first major challenge and often not possible. We wrongly expect that we can implement a generic quantum algorithm that breaks the curse of exponential complexity easily. Often, scientists introduce Oracle functions to demonstrate quantum supremacy by simplifying or ignoring some part of the problem. For beginners, this turns out to be a great disappointment once they realize that such Oracle functions cannot be implemented efficiently in a generic context, or that the demonstrated implementations have no practical application. If you ask for more practical examples of the Oracle function, you may be missing the reason why there is an Oracle function in the first place. But don’t be disappointed. The effort of designing algorithms with Oracle functions does bear fruit. As we solve enough problems, we develop algorithms like Shor’s algorithm that demonstrate the real effectiveness of quantum computing. Oracle functions are the stepping stones that lead us to other important algorithms.
The root of the problem is very simple. If we treat the problem as a generic one, we need to handle all the possibilities. We cannot assume the data have a specific structure, and we cannot escape the curse of loading and examining every data point. To achieve quantum supremacy, the data involved in the problem must have a structure that lets us load the needed information in polynomial time.
Shor’s algorithm solves the prime factorization problem in polynomial time and, more importantly, it breaks RSA encryption. We will discuss the details in later articles, but we present the big picture here. Prime factorization finds the factors of an N that is the product of two prime numbers. For example, the prime factors of 15 are 3 and 5. The fastest known classical algorithm (the general number field sieve) runs in time on the order of exp(O(n^(1/3) (log n)^(2/3))), where n is the number of bits of N.
We can encode an exponential amount of information in the coefficients of the computational basis:
But we want to load the needed coefficients in polynomial time. The hardest part of Shor’s algorithm is to find the period of the modular exponentiation function f(x) = aˣ mod N held in the second register below. (A register is just a collection of qubits.)
For a = 7, this is what it looks like:
The modulo function is periodic, with a repeated pattern. This creates the structure that we are looking for. In Shor’s algorithm, we can prepare a similar (approximate) variant of the state above in polynomial time! After applying the inverse Quantum Fourier Transform (IQFT) to the first register, we can recover the period of the modulo function. The circuit below demonstrates how we prepare the superposition; it is then followed by the IQFT to compute the period.
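A small classical sketch shows the structure Shor’s algorithm exploits: f(x) = aˣ mod N is periodic, and once the period r is known, the factors fall out via gcd(a^(r/2) ± 1, N). We use a = 7 and N = 15, matching the example in the text. (A real quantum computer finds r with the IQFT; the brute-force loop here is only for illustration and is itself exponential in the bit length of N.)

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r == 1 (mod N), found by brute force."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

a, N = 7, 15
r = find_period(a, N)                  # 7, 4, 13, 1, 7, 4, ... repeats with period 4
p = gcd(a ** (r // 2) - 1, N)          # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)          # gcd(50, 15) = 5
print("period:", r, "factors:", p, q)  # period: 4 factors: 3 5
```

The quantum speedup comes entirely from replacing the brute-force period search with superposition preparation plus the IQFT, both of which run in polynomial time.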
The great news is that both can be done in polynomial time. This is an exciting breakthrough in quantum computing research! No classical algorithm is known with the same efficiency. By exploiting the structure of the problem, we take advantage of quantum techniques to break the curse of exponential complexity. In fact, the quantum algorithms before Shor’s algorithm were usually considered toy experiments. After Shor’s algorithm, quantum computing attracted investment beyond academia.
But remember, for many problems no such breakthrough has been found. It is not simple to develop algorithms with an exponential speedup over their classical counterparts.
In classical computing, results are stored in variables and the computer simply reads them. How do we retrieve information from quantum circuits? Recall that nature does not provide a means to access the amplitude (coefficient) of a superposition directly. Instead, we make measurements and derive the answer from the measured states. Let’s look into some simple possibilities.
Let’s say the calculated result should be the value 010. One possibility is that the algorithm pushes the amplitude α₂ for the computational basis state |010⟩ higher. Then, by repeating the calculation many times, the most frequently measured value will be the answer, i.e. 010. So the algorithm needs to push up the amplitude of |010⟩; otherwise, we would have to repeat the calculation and measurement an exponential number of times just to be sure of the most likely measurement.
In other words, we need to consolidate the superposition. After pushing the amplitude of our answer much higher, we can find the answer by, say, repeating the calculation only 1,000 times. Of course, if we are unlucky, our measurements can still be wrong. But we can always verify the answer afterward. For many problems, finding a solution is hard but verifying a solution is much easier (that is the character of NP problems).
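The repeated-measurement idea above can be simulated directly. The amplitudes below are made up for illustration: we boost |010⟩ so its measurement probability is about 0.93, then sample 1,000 shots and see that |010⟩ clearly dominates.

```python
import random
from collections import Counter

def measure(amplitudes, shots, rng):
    """Sample 3-bit basis states with probability |amplitude|**2."""
    probs = [abs(a) ** 2 for a in amplitudes]
    states = [format(i, "03b") for i in range(len(amplitudes))]
    return Counter(rng.choices(states, weights=probs, k=shots))

rng = random.Random(0)
boosted = [0.1] * 8                        # small amplitude on 7 "wrong" states
boosted[2] = (1 - 7 * 0.1 ** 2) ** 0.5     # amplitude ~0.96 on |010>
counts = measure(boosted, shots=1000, rng=rng)
print(counts.most_common(3))               # "010" wins by a wide margin
```

With a uniform superposition instead, every state would appear roughly 125 times in 1,000 shots and the measurement would tell us nothing; consolidation is what makes a modest number of repetitions sufficient.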
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of problems that can be solved by a quantum computer in polynomial time with a probability of at most 1/3 that the answer is wrong in each run. This is the class of problems that quantum computers aim to address.
Classical computers vs. quantum computers
Classical computers build on a basic set of gates like NOT, NAND, and XOR to construct computations. In quantum computers, we perform linear algebra with operators. We can do it fast on an exponential amount of information, as long as the operators are unitary.
Since the operators use complex numbers, we can produce operations that closely resemble many existing equations.
For example, performing a Discrete Fourier Transform (DFT) below is simple in quantum computers.
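As a sketch of why this works, the QFT on n qubits is simply the DFT matrix of size 2ⁿ, F[j][k] = ω^(jk)/√(2ⁿ) with ω = exp(2πi/2ⁿ), and it qualifies as a quantum gate precisely because it is unitary. The check below builds it for n = 2 and verifies F F† = I.

```python
import cmath

def qft_matrix(n_qubits):
    """The QFT on n qubits: the unitary DFT matrix of size 2**n."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    return [[omega ** (j * k) / N ** 0.5 for k in range(N)]
            for j in range(N)]

F = qft_matrix(2)
N = len(F)
# Multiply F by its conjugate transpose; the result should be the identity.
prod = [[sum(F[i][k] * F[j][k].conjugate() for k in range(N))
         for j in range(N)] for i in range(N)]
ok = all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-9
         for i in range(N) for j in range(N))
print("QFT is unitary:", ok)  # True
```

On a quantum computer this 2ⁿ×2ⁿ transform decomposes into only O(n²) elementary gates, which is what makes the QFT in Shor’s algorithm so cheap compared to a classical DFT over 2ⁿ values.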
This turns out to be important in quantum computing. For example, Shor’s algorithm depends on the Quantum Fourier Transform to find the period of a function. Next, we will detail quantum gates, the fundamental building blocks of quantum algorithms.