Denoising Diffusion Error Correction Codes

Authors: Yoni Choukroun, Lior Wolf

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate our method, we train the proposed architecture with three classes of linear block codes: Low-Density Parity Check (LDPC) codes (Gallager, 1962), Polar codes (Arikan, 2008), and Bose-Chaudhuri-Hocquenghem (BCH) codes (Bose & Ray-Chaudhuri, 1960). The results are reported in Tab. 1, where we present the negative natural logarithm of the BER. [A worked example of this metric appears after the table.]
Researcher Affiliation | Academia | Yoni Choukroun, The Blavatnik School of Computer Science, Tel Aviv University, choukroun.yoni@gmail.com; Lior Wolf, The Blavatnik School of Computer Science, Tel Aviv University, wolf@cs.tau.ac.il
Pseudocode | Yes | Algorithm 1: DDECC training procedure; Algorithm 2: DDECC sampling procedure
Open Source Code | Yes | Our code is attached as supplementary material.
Open Datasets | Yes | All parity check matrices are taken from Helmling et al. (2019). (Full citation in references: Michael Helmling, Stefan Scholl, Florian Gensheimer, Tobias Dietz, Kira Kraft, Stefan Ruzika, and Norbert Wehn. Database of Channel Codes and ML Simulation Results. www.uni-kl.de/channel-codes, 2019.)
Dataset Splits | No | The paper specifies the training epochs, mini-batch size, and testing procedure, but it does not describe a validation split or give percentages/counts for train/validation/test splits.
Hardware Specification | Yes | Training and experiments were performed on a 12GB Titan V GPU.
Software Dependencies | No | The paper mentions the Adam optimizer and the AFF3CT software used for comparison, but it does not report version numbers for these or for other key dependencies such as the programming language or deep learning framework.
Experiment Setup | Yes | The Adam optimizer (Kingma & Ba, 2014) is used with 128 samples per mini-batch, for 2000 epochs with 1000 mini-batches per epoch. The noise scheduling is constant and set to β_t = 0.01 for all t. We initialized the learning rate to 10^-4, coupled with a cosine decay scheduler down to 5 × 10^-6 at the end of training.
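
The Experiment Setup row above lists the reported hyperparameters. The following is a minimal, hypothetical PyTorch-style sketch of that optimization setup only: the model, data sampler, and loss are placeholders rather than the authors' DDECC implementation, and the per-epoch scheduler step is an assumption.

```python
import torch

# Placeholder model, data, and loss: the real DDECC transformer denoiser and
# diffusion objective are defined in the paper, not reproduced here.
n = 63                                     # hypothetical code length
model = torch.nn.Linear(n, n)
loss_fn = torch.nn.MSELoss()

def sample_training_batch(batch_size):
    # Hypothetical stand-in for sampling noisy channel outputs and targets.
    noisy = torch.randn(batch_size, n)
    target = torch.zeros(batch_size, n)
    return noisy, target

# Reported setup: Adam, lr 1e-4 decayed by a cosine schedule to 5e-6,
# 128 samples per mini-batch, 2000 epochs of 1000 mini-batches,
# constant noise schedule beta_t = 0.01 for all t.
epochs, batches_per_epoch, batch_size = 2000, 1000, 128
beta_t = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=5e-6)

for epoch in range(epochs):
    for _ in range(batches_per_epoch):
        noisy, target = sample_training_batch(batch_size)
        loss = loss_fn(model(noisy), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                       # assumed per-epoch decay down to 5e-6
```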
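
The Research Type row reports results as the negative natural logarithm of the bit error rate. As a quick, hypothetical illustration of the scale of that metric (the BER values below are made up; the actual numbers are in Tab. 1 of the paper):

```python
import math

# Hypothetical BER values, only to show how a BER maps to the reported -ln(BER).
for ber in (1e-2, 1e-4, 1e-6):
    print(f"BER = {ber:g}  ->  -ln(BER) = {-math.log(ber):.2f}")
# Lower BER gives a larger reported value:
# -ln(1e-2) ~= 4.61, -ln(1e-4) ~= 9.21, -ln(1e-6) ~= 13.82.
```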