DeepPolar: Inventing Nonlinear Large-Kernel Polar Codes via Deep Learning

Authors: S Ashwin Hebbar, Sravan Kumar Ankireddy, Hyeji Kim, Sewoong Oh, Pramod Viswanath

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our results demonstrate that these data-driven codes effectively leverage the benefits of a larger kernel size, resulting in enhanced reliability when compared to both existing neural codes and conventional Polar codes. Source code is available at this link." and "Through extensive empirical studies, we find that a kernel size ℓ = √n is most effective in balancing bias and variance, enabling it to achieve significantly lower bit error rates over the baseline Polar and RM codes, as well as the KO(8, 2) code, the SOTA at this block length and rate."
Researcher Affiliation | Academia | "¹Princeton University, ²University of Texas at Austin, ³University of Washington. Correspondence to: Ashwin Hebbar <hebbar@princeton.edu>."
Pseudocode | Yes | "Algorithm 1: Training algorithm for DEEPPOLAR(256, 37, ℓ = 16)" (see the training-loop sketch below the table)
Open Source Code | Yes | "Source code is available at this link." and "The complete source code is provided at: https://www.github.com/hebbarashwin/deeppolar"
Open Datasets | No | "We generate synthetic input data for the encoder by randomly sampling from a boolean hypercube, i.e., {0, 1}^k." (see the data-generation sketch below the table)
Dataset Splits | No | The paper describes data generation and training but does not explicitly specify training/validation/test splits or their sizes; it only mentions generating synthetic input data for the encoder.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python version, library versions) needed for replication.
Experiment Setup | Yes | Table 1, "Hyperparameters used in model training for DEEPPOLAR(256, 37, ℓ = 16)", is provided in Section E.4 and lists values for batch size, encoder training SNR, decoder training SNR, total epochs, encoder steps per epoch, decoder steps per epoch, encoder learning rate, decoder learning rate, and optimizer (these names are mirrored in the training-loop sketch below).
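
The "Open Datasets" row notes that training data is purely synthetic: messages are drawn uniformly from the Boolean hypercube {0, 1}^k and observed through a noisy channel. The following is a minimal sketch of that data pipeline in PyTorch; the function names, the BPSK mapping, and the SNR-to-noise-standard-deviation convention are our own illustrative assumptions, not code from the DeepPolar repository.

import torch

def sample_messages(batch_size: int, k: int) -> torch.Tensor:
    # Draw messages uniformly from the Boolean hypercube {0, 1}^k.
    return torch.randint(0, 2, (batch_size, k)).float()

def awgn_channel(codeword: torch.Tensor, snr_db: float) -> torch.Tensor:
    # Add white Gaussian noise to a BPSK-modulated codeword.
    # Assumes unit-energy symbols (bit 0 -> +1, bit 1 -> -1) and one common
    # convention for converting SNR in dB to a per-symbol noise std.
    bpsk = 1.0 - 2.0 * codeword
    sigma = 10.0 ** (-snr_db / 20.0)
    noise = sigma * torch.randn_like(bpsk)
    return bpsk + noise

# Example: one synthetic training batch for a (n=256, k=37) code.
messages = sample_messages(batch_size=1024, k=37)   # encoder input
# codewords = encoder(messages)                      # hypothetical learned encoder
# received = awgn_channel(codewords, snr_db=0.0)     # decoder input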
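
The "Pseudocode" and "Experiment Setup" rows point to Algorithm 1 and Table 1, whose hyperparameters (separate encoder/decoder steps per epoch, learning rates, and training SNRs) indicate an alternating encoder/decoder training schedule. The sketch below illustrates such a schedule under our own assumptions: the encoder/decoder module interfaces, the BCE-with-logits loss, the Adam optimizer, and every default argument value are placeholders, not the values reported in the paper's Table 1 (those are in Section E.4). It reuses sample_messages and awgn_channel from the data-generation sketch above.

import torch
import torch.nn as nn

def train_deeppolar_style(encoder: nn.Module,
                          decoder: nn.Module,
                          k: int = 37,
                          epochs: int = 10,             # placeholder value
                          enc_steps: int = 50,          # encoder steps per epoch (placeholder)
                          dec_steps: int = 500,         # decoder steps per epoch (placeholder)
                          batch_size: int = 1024,       # placeholder value
                          enc_lr: float = 1e-4,         # placeholder value
                          dec_lr: float = 1e-4,         # placeholder value
                          enc_train_snr: float = 0.0,   # placeholder value
                          dec_train_snr: float = 0.0):  # placeholder value
    # Alternate decoder and encoder updates; argument names mirror the
    # hyperparameter fields listed in Table 1, with illustrative defaults.
    enc_opt = torch.optim.Adam(encoder.parameters(), lr=enc_lr)
    dec_opt = torch.optim.Adam(decoder.parameters(), lr=dec_lr)
    bce = nn.BCEWithLogitsLoss()   # assumes the decoder outputs per-bit logits

    for _ in range(epochs):
        # Phase 1: step only the decoder; the encoder output is detached
        # so no encoder gradients are computed in this phase.
        for _ in range(dec_steps):
            m = sample_messages(batch_size, k)
            y = awgn_channel(encoder(m).detach(), dec_train_snr)
            loss = bce(decoder(y), m)
            dec_opt.zero_grad()
            loss.backward()
            dec_opt.step()

        # Phase 2: step only the encoder, backpropagating through the
        # (fixed) decoder to the encoder parameters.
        for _ in range(enc_steps):
            m = sample_messages(batch_size, k)
            y = awgn_channel(encoder(m), enc_train_snr)
            loss = bce(decoder(y), m)
            enc_opt.zero_grad()
            loss.backward()
            enc_opt.step()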