Learning Linear Block Error Correction Codes

Authors: Yoni Choukroun, Lior Wolf

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate our method, we train the proposed architecture with four classes of linear codes: Low-Density Parity-Check (LDPC) codes (Gallager, 1962), Polar codes (Arikan, 2008), Reed-Solomon codes (Reed & Solomon, 1960) and Bose-Chaudhuri-Hocquenghem (BCH) codes (Bose & Ray-Chaudhuri, 1960). All the parity check matrices are taken from (Helmling et al., 2019)." (See the syndrome sketch after this table.)
Researcher Affiliation | Academia | "The Blavatnik School of Computer Science, Tel Aviv University."
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/yoniLc/E2E_DC_ECCT.
Open Datasets | Yes | "All the parity check matrices are taken from (Helmling et al., 2019)."
Dataset Splits | No | The paper states that "the hyperparameter search was performed using a validation set," but it gives no percentages or sample counts for train/validation/test splits.
Hardware Specification | Yes | "Training and experiments are performed on a 12GB GeForce RTX 2080 Ti GPU."
Software Dependencies | No | The paper does not provide version numbers for the software libraries used in the experiments.
Experiment Setup | Yes | "The Adam optimizer (Kingma & Ba, 2014) is used with 1024 samples per minibatch, for 1K epochs, with 1K minibatches per epoch. We initialized the learning rate to 10^-4 coupled with a cosine decay scheduler down to 10^-6 at the end of the training." (See the training-loop sketch after this table.)
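As context for the Research Type row: LDPC, Polar, Reed-Solomon, and BCH codes are all linear block codes defined by a parity-check matrix H, and a decoder's goal is to drive the syndrome Hy^T (mod 2) of the received word to zero. The sketch below is illustrative only: it uses a hard-coded Hamming(7,4) parity-check matrix rather than any matrix from the Helmling et al. (2019) database, and the message, bit flip, and variable names are assumptions for demonstration.

import numpy as np

# Illustrative parity-check matrix for the (7,4) Hamming code in systematic
# form H = [P^T | I_3]; NOT one of the Helmling et al. (2019) matrices.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix [I_4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2                      # encode: c = mG (mod 2)
assert not (codeword @ H.T % 2).any()       # valid codeword has zero syndrome

received = codeword.copy()
received[2] ^= 1                            # flip one bit (channel error)
syndrome = received @ H.T % 2               # nonzero syndrome flags the error
print("syndrome:", syndrome)

Since G H^T = P + P = 0 (mod 2), every valid codeword has a zero syndrome, and any single bit flip produces the corresponding column of H as the syndrome.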
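For the Experiment Setup row, here is a minimal PyTorch sketch of the quoted optimization schedule. The model, data, and loss are placeholders (the paper's actual architecture is a Transformer-based decoder, not reproduced here), and stepping the cosine scheduler once per epoch is an assumption; only the optimizer choice, batch size, epoch/minibatch counts, and learning-rate endpoints come from the paper.

import torch

# Placeholder model; stands in for the paper's decoder architecture.
model = torch.nn.Linear(64, 64)

epochs, minibatches_per_epoch, batch_size = 1000, 1000, 1024
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Cosine decay from 1e-4 down to 1e-6 over training; per-epoch stepping assumed.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=1e-6)

for epoch in range(epochs):
    for _ in range(minibatches_per_epoch):
        x = torch.randn(batch_size, 64)  # stand-in for noisy channel outputs
        loss = model(x).pow(2).mean()    # stand-in objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                     # anneal the learning rate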