Deep Quantum Error Correction

Authors: Yoni Choukroun, Lior Wolf

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on various Toric code lengths, considering the two common noise models: independent and depolarization. In the experiments, we employ code lengths similar to those for which the existing end-to-end neural decoders were tested, i.e., 2 < L ≤ 10 (Varsamopoulos, Criger, and Bertels 2017; Torlai and Melko 2017; Krastanov and Jiang 2017; Chamberland and Ronagh 2018; Andreasson et al. 2019; Wagner, Kampermann, and Bruß 2020).
Researcher Affiliation | Academia | The Blavatnik School of Computer Science, Tel Aviv University; choukroun.yoni@gmail.com, wolf@cs.tau.ac.il
Pseudocode | No | The paper describes the architecture and training procedure in the "Method" section and provides an illustration in Figure 2, but it does not include any formal pseudocode blocks or algorithms.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the proposed method is publicly available. It mentions implementations taken from other works (e.g., "The implementation of the Toric codes is taken from (Krastanov and Jiang 2017)").
Open Datasets | No | The paper states, "The training is performed by randomly sampling noise in the physical error rate testing range." This indicates that training data is generated on the fly rather than drawn from a pre-existing public dataset, and no access information for the generated data is provided (see the data-sampling sketch following this table).
Dataset Splits | No | The paper states, "The number of testing samples is set to 10^6," but does not specify how the generated data is split into training, validation, and test sets, nor does it provide percentages or counts for each split that would enable reproduction of the data partitioning.
Hardware Specification | Yes | Training and experiments were performed on a 12GB Titan V GPU.
Software Dependencies | No | The paper mentions using the "Adam optimizer (Kingma and Ba 2014)" and the "STIM (Gidney 2021) simulator." While it names tools, it does not provide specific version numbers for general software dependencies (e.g., Python, PyTorch/TensorFlow).
Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2014) is used with 512 samples per minibatch, for 200 to 800 epochs depending on the code length, with 5000 minibatches per epoch. The default weight parameters are λ_g = 0.5, λ_LER = 1, λ_BER = 0.5. The default architecture is N = 6, d = 128. The learning rate is initialized to 5 × 10^-4, coupled with a cosine decay scheduler down to 5 × 10^-7 at the end of training (see the optimizer sketch following this table).
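
For illustration, below is a minimal sketch (not the authors' code) of how training data could be generated by sampling depolarization noise with the STIM simulator, as described in the Open Datasets and Software Dependencies rows. The rotated surface-code memory task stands in for the paper's Toric codes, and the error-rate range `p_range`, the number of rounds, and the function name `sample_batch` are illustrative assumptions.

```python
# Hypothetical sketch: sample (syndrome, logical-flip) training pairs with STIM.
import numpy as np
import stim


def sample_batch(distance: int, p_range=(0.01, 0.15), shots: int = 512, rounds: int = 1):
    """Draw a physical error rate at random and sample one minibatch of syndromes."""
    p = float(np.random.uniform(*p_range))          # physical error rate for this batch
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",            # stand-in for the paper's Toric code
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=p,            # depolarization noise model
    )
    sampler = circuit.compile_detector_sampler()
    detectors, observables = sampler.sample(shots, separate_observables=True)
    return detectors.astype(np.float32), observables.astype(np.float32)


# Example usage: one minibatch of 512 samples for a distance-5 code.
syndromes, logical_flips = sample_batch(distance=5, shots=512)
```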
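Similarly, the following sketch wires up the hyperparameters from the Experiment Setup row, assuming PyTorch (the paper does not state its framework); the `Linear` module is only a placeholder for the paper's Transformer decoder with N = 6 layers and embedding dimension d = 128.

```python
# Hypothetical sketch of the reported optimizer and learning-rate schedule.
import torch

# Placeholder for the actual decoder architecture (N = 6 layers, d = 128).
model = torch.nn.Linear(25, 1)

batch_size = 512                     # samples per minibatch
epochs = 200                         # 200 to 800 depending on code length
minibatches_per_epoch = 5000
total_steps = epochs * minibatches_per_epoch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
# Cosine decay from 5e-4 at the start of training down to 5e-7 at the end.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps, eta_min=5e-7
)

# Loss-term weights reported in the paper.
lambda_g, lambda_LER, lambda_BER = 0.5, 1.0, 0.5
```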