Suppose we have a noisy quantum computer and we want to perform a long computation. We could encode the data using the seven-qubit code, for instance, and perform the computation fault-tolerantly. The seven-qubit code corrects a single arbitrary error, so in order for the computation to fail now, two errors need to happen in the time between error correction steps. If the original error rate per step was p, the new effective error rate per step is Cp^2 for some constant C. Before, we could perform about 1/p operations before the computer is likely to fail; now we can perform about 1/(Cp^2). If p is small, this can be quite a bit more. The constant C will depend on how many operations we do between error corrections and on how many operations are required to perform an error correction (since new errors can occur while we are attempting to fix old ones).
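To make the gain concrete, here is a tiny Python sketch comparing the unencoded and encoded lifetimes. The values of C and p are made-up placeholders chosen for illustration, not numbers taken from any actual fault-tolerance analysis.

    # Rough comparison: unencoded vs. one level of the seven-qubit code.
    C = 1e4   # placeholder constant; the real value depends on the circuits used
    p = 1e-5  # placeholder physical error rate per step

    p_encoded = C * p**2           # effective error rate after one level of encoding
    steps_bare = 1 / p             # ~ steps an unencoded computation survives
    steps_encoded = 1 / p_encoded  # ~ steps the encoded computation survives
    print(f"unencoded: ~{steps_bare:.0e} steps, encoded: ~{steps_encoded:.0e} steps")
    # unencoded: ~1e+05 steps, encoded: ~1e+06 steps

With these made-up numbers, one level of encoding buys a factor of ten in the length of the computation.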
But suppose we want to do an even longer calculation. What options do we have? We could use a different code. However, that code might require more effort to correct errors, so even if it can correct more errors, it might not be an improvement. A better choice is to encode the data a second time using the seven-qubit code. Now each logical qubit in the computer is encoded using 49 physical qubits. In order to have an error on the data encoded at level 2, there need to be two level-1 errors in the time between error correction steps. The error rate now goes from Cp^2 to C(Cp^2)^2, or C^3 p^4.
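For the skeptical reader, the algebra can be checked by applying the level-1 map p -> Cp^2 twice symbolically; the snippet below uses Python's sympy package purely as a convenience.

    from sympy import symbols

    C, p = symbols("C p", positive=True)
    level1 = C * p**2       # effective rate after one level
    level2 = C * level1**2  # encode the already-encoded qubits again
    print(level2)           # -> C**3*p**4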
If even this error rate is not enough, we can add a third or fourth level of concatenation. In general, if we use L levels of concatenation, the effective error rate will be (Cp)^(2^L)/C. As long as p < 1/C, the effective error rate per step will rapidly go to 0 as we add levels. 1/C is thus the threshold error rate below which arbitrarily long quantum computations are possible.
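The doubly exponential suppression is easy to see numerically. The sketch below iterates the map p -> Cp^2 and checks it against the closed form (Cp)^(2^L)/C, again with placeholder values of C and p (chosen so that Cp = 0.1, i.e. below threshold):

    # Iterate the level map p -> C p^2 and compare with (Cp)^(2^L)/C.
    C = 1e4   # placeholder constant, as before
    p = 1e-5  # placeholder physical rate; C*p = 0.1 < 1

    rate = p
    for L in range(5):
        closed_form = (C * p) ** (2 ** L) / C
        print(f"level {L}: rate = {rate:.3e} (closed form {closed_form:.3e})")
        rate = C * rate**2  # add one more level of concatenation

Each added level squares the quantity Cp, so the rate here falls from 10^-5 to 10^-6, 10^-8, 10^-12, 10^-20, and so on.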
One nice feature of concatenated codes is that we do not need very many levels of concatenation to get a big reduction in the error rate. Suppose we want to do a calculation requiring T steps. We should reduce the error rate to about 1/T in order to have a good chance of finishing the calculation without an error. Since the error rate is a double exponential in the number of levels, we only need about log(log T) levels to get this error rate.
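As a check on this double logarithm, one can simply count levels numerically; the sketch below reuses the same placeholder C and p, so the specific level counts are illustrative only.

    # Smallest number of levels L at which the effective error rate
    # drops below 1/T, using the same placeholder C and p as above.
    def levels_needed(T, C=1e4, p=1e-5):
        L, rate = 0, p
        while rate > 1 / T:
            rate = C * rate**2  # add one more level of concatenation
            L += 1
        return L

    for T in (1e7, 1e15, 1e30, 1e60):
        print(f"T = {T:.0e}: {levels_needed(T)} levels")

Squaring the length of the computation, from 10^30 steps to 10^60, costs just one additional level here.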
Of course, the number of qubits we need goes up exponentially with the number of levels (there are 7^L qubits per block). For short calculations, we can probably find a smaller code that will work. For long calculations, the exponential kills one of the logarithms, but that still means we only require (log T)^k qubits per block, which is an acceptably slow rise in overhead.
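Continuing the sketch (and reusing the hypothetical levels_needed function from above): the overhead is 7^L physical qubits per logical qubit, and since 2^L scales like log T, this is (log T)^k with k = log2 7, roughly 2.8, for the seven-qubit code.

    for T in (1e7, 1e15, 1e30, 1e60):
        L = levels_needed(T)  # from the previous sketch
        print(f"T = {T:.0e}: {L} levels, {7**L} qubits per block")

Even the 10^60-step computation needs only about 10^5 physical qubits per logical qubit at these made-up parameters.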
In order to make the best use of this scheme, we should try to make C as small as possible. This is largely done by optimizing the error-correction step. Depending on how detailed an analysis and how thorough an optimization you do, you can get a number of different results. Right now, it appears the threshold can be brought above 10^-4, possibly even above 10^-3. This means that if we can make a quantum computer which has an error once per ten thousand steps or so, we can use it to do arbitrarily long quantum computations. While this is a challenging technical problem, it is by no means an inconceivable goal.
October 29, 1997