Single-Shot Decoding with High Thresholds in LDPC Quantum Codes with Constant Encoding Rate


Event: 5th International Conference on Quantum Error Correction


It is believed that active quantum error correction will be an essential ingredient in building a scalable quantum computer. The currently favored scheme is the surface code, owing to its high decoding threshold and efficient decoding algorithm. However, it suffers from large overheads, which become even more severe when parity-check measurements are subject to errors and have to be repeated. Furthermore, the number of encoded qubits in the surface code does not grow with system size, leading to a sub-optimal use of the physical qubits. Finally, the decoding algorithm, while efficient, has non-trivial complexity, and it is not clear whether it can be implemented in hardware fast enough for the classical processing to keep pace with the error correction cycle.

We present a class of low-density parity-check (LDPC) quantum codes which address all three of these concerns. They were first proposed in [1] and called 4D hyperbolic codes, as their definition is based on four-dimensional, curved geometries. They have the remarkable property that the number of encoded qubits grows linearly with system size, while their distance grows polynomially with system size, i.e. d ~ n^a with 0.1 < a < 0.3. Such codes were previously conjectured not to exist [1]. Their structure allows for decoders which can deal with erroneous syndrome measurements, a property called single-shot error correction [2], as well as for local decoding schemes [3].

Although [1] analyzed the encoding rate and distance of this code family abstractly, actually constructing examples is a non-trivial task, and no efficient deterministic procedure for obtaining small instances is known. Previously, only isolated examples of reasonable size had been obtained [4,5], and since these belonged to different code families, it was not possible to determine a threshold. We succeeded in constructing several small examples by combining randomized search with algebraic tools. We analyze the performance of these codes under several different local decoding procedures via Monte Carlo simulations. All of these decoders can be executed in parallel in O(1) time. Under the phenomenological noise model [6], which includes syndrome measurement errors, we obtain a threshold of ~5%, which to our knowledge is the highest threshold among all local decoding schemes.
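To make the setting concrete, the following is a minimal sketch in Python of the phenomenological noise model together with a simple local decoder. The parity-check matrix H, the majority "flip" rule, and the fixed number of rounds are illustrative assumptions, not the specific decoders analyzed here; the sketch only shows why such decoders run in O(1) parallel time, since each round is a constant-depth update applied to all qubits simultaneously.

    import numpy as np

    rng = np.random.default_rng()

    def noisy_syndrome(H, error, q):
        """Phenomenological noise model: measure the syndrome s = H e (mod 2),
        then flip each syndrome bit independently with probability q to model
        faulty parity-check measurements."""
        s = H.dot(error) % 2
        flips = (rng.random(H.shape[0]) < q).astype(int)
        return (s + flips) % 2

    def local_decode(H, syndrome, rounds=4):
        """Toy local decoder: in every round, each qubit for which a majority
        of its adjacent checks are unsatisfied is flipped, and all qubits
        update simultaneously.  A constant number of such rounds amounts to
        a constant-depth parallel circuit, i.e. O(1) parallel time."""
        correction = np.zeros(H.shape[1], dtype=int)
        s = syndrome.copy()
        degree = H.sum(axis=0)            # checks acting on each qubit
        for _ in range(rounds):
            unsat = H.T.dot(s)            # unsatisfied checks per qubit
            flips = (2 * unsat > degree).astype(int)
            if not flips.any():
                break
            correction = (correction + flips) % 2
            s = (s + H.dot(flips)) % 2    # update the syndrome estimate
        return correction

Note that in the single-shot setting the correction is applied on the basis of a single noisy syndrome; the decoder is not required to remove the error completely, only to keep the residual error small over repeated correction rounds.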
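Given such a decoder, the threshold is estimated by Monte Carlo simulation: sample many error/syndrome pairs at a given physical error rate p, decode, and record how often a logical error results. The sketch below builds on the one above; the matrix of logical operators (here called logicals) is an assumed input, and declaring failure whenever the residual error anticommutes with a logical operator is a common simplification.

    def logical_error_rate(H, logicals, p, trials=10_000):
        """Estimate the logical error rate at physical error rate p,
        with each syndrome bit also flipped with probability q = p."""
        n = H.shape[1]
        failures = 0
        for _ in range(trials):
            error = (rng.random(n) < p).astype(int)
            syndrome = noisy_syndrome(H, error, q=p)
            residual = (error + local_decode(H, syndrome)) % 2
            # Failure: the residual error acts non-trivially on a logical.
            if (logicals.dot(residual) % 2).any():
                failures += 1
        return failures / trials

Plotting the logical error rate against p for codes of increasing size, the threshold is read off as the crossing point of the curves: below it, larger codes suppress logical errors more strongly.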

[1] L. Guth, A. Lubotzky, Journal of Mathematical Physics 55, 082202 (2014).

[2] H. Bombin, Physical Review X 5 (3), 031043 (2015).

[3] M. B. Hastings, Quantum Information & Computation 14, 1187 (2014).

[4] V. Londe, A. Leverrier, arXiv:1712.08578 (2017).

[5] N. P. Breuckmann, PhD thesis, RWTH Aachen University (2017).

[6] C. Wang, J. Harrington, J. Preskill, Annals of Physics 303 (1), 31-58 (2003).