Doubly Residual Neural Decoder: Towards Low-Complexity High-Performance Channel Decoding

Authors: Siyu Liao, Chunhua Deng, Miao Yin, Bo Yuan

AAAI 2021, pp. 8574-8582

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Extensive experiment results show that on different types of channel codes, our DRN decoder consistently outperform the state-of-the-art decoders in terms of decoding performance, model sizes and computational cost." From the Experiment section: "In this section, we compare DRN decoder with the traditional BP and the state-of-the-art NBP and HGN decoders in terms of decoding performance (BER), model size and computational cost."

Researcher Affiliation | Academia | "Siyu Liao, Chunhua Deng, Miao Yin, Bo Yuan. Department of Electrical and Computer Engineering, Rutgers University. siyu.liao@rutgers.edu, chunhua.deng@rutgers.edu, miao.yin@rutgers.edu, bo.yuan@soe.rutgers.edu"

Pseudocode | No | No structured pseudocode or algorithm blocks (e.g., explicitly labeled 'Pseudocode' or 'Algorithm' sections) were found.

Open Source Code | No | The paper does not provide any specific repository links or explicit statements about code availability.

Open Datasets | Yes | "The parity check matrices are adopted from (Helmling et al. 2019)." Reference: Helmling, M.; Scholl, S.; Gensheimer, F.; Dietz, T.; Kraft, K.; Ruzika, S.; and Wehn, N. 2019. Database of Channel Codes and ML Simulation Results. www.uni-kl.de/channelcodes (last accessed 2/21/2021). A hypothetical loading sketch for this matrix format follows the table.

Dataset Splits | No | The paper states "The training samples are generated on the fly and testing samples are generated till at least 100 error samples detected at each SNR setting." but does not provide specific training/validation/test dataset splits (percentages, sample counts, or explicit fixed dataset partitions). A sketch of the implied BER-measurement loop follows the table.

Hardware Specification | Yes | "Our experiment environment is Ubuntu 16.04 with 256GB random access memory (RAM), Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and Nvidia-V100 GPU."

Software Dependencies | No | The paper mentions 'Ubuntu 16.04' as the operating system and the 'RMSprop optimizer' but does not specify versions for key ancillary software components or libraries (e.g., deep learning frameworks).

Experiment Setup | Yes | "The training batch size is 384, so there are 64 samples generated at each SNR value. We use the RMSprop optimizer (Hinton, Srivastava, and Swersky 2012) with learning rate 0.001 and run 20,000 iterations." A training-loop sketch with these hyperparameters follows the table.
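
The cited database (Helmling et al. 2019) distributes parity-check matrices as plain-text files, typically in the MacKay-style alist format. A minimal loading sketch, assuming the zero-padded alist variant; the filename in the usage example is hypothetical:

    import numpy as np

    def load_alist(path):
        """Parse a parity-check matrix H from a zero-padded alist file."""
        tokens = iter(open(path).read().split())
        n, m = int(next(tokens)), int(next(tokens))    # code length, number of checks
        max_col_deg = int(next(tokens))                # max variable-node degree
        next(tokens)                                   # skip max check-node degree
        _col_degs = [int(next(tokens)) for _ in range(n)]
        _row_degs = [int(next(tokens)) for _ in range(m)]
        H = np.zeros((m, n), dtype=np.int8)
        for col in range(n):                           # per-column check-node lists
            for _ in range(max_col_deg):
                row = int(next(tokens))                # 1-indexed; 0 entries are padding
                if row:
                    H[row - 1, col] = 1
        return H

    # Example (hypothetical filename): H = load_alist("BCH_63_45.alist")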
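
From the reported setup, a batch size of 384 with 64 samples per SNR value implies a grid of six SNR points. A sketch of a matching training loop follows; the deep learning framework (PyTorch here), the loss function, the concrete SNR grid, and all-zero-codeword training over a BPSK/AWGN channel are assumptions, since the paper specifies none of them:

    import numpy as np
    import torch

    def train(model, n, snr_grid_db=(1, 2, 3, 4, 5, 6), iters=20_000):
        """Hypothetical training loop matching the reported hyperparameters:
        64 on-the-fly samples per SNR point (6 points assumed -> batch 384),
        RMSprop with learning rate 0.001, 20,000 iterations."""
        opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
        loss_fn = torch.nn.BCEWithLogitsLoss()            # loss choice is an assumption
        sigmas = np.sqrt(0.5 * 10 ** (-np.asarray(snr_grid_db, float) / 10.0))
        for _ in range(iters):
            # Noisy BPSK observations of the all-zero codeword, 64 per SNR point.
            y = np.concatenate([1.0 + s * np.random.randn(64, n) for s in sigmas])
            var = np.repeat(sigmas ** 2, 64)[:, None]     # per-sample noise variance
            llr = torch.tensor(2.0 * y / var, dtype=torch.float32)
            logits = model(llr)                           # per-bit logits from the decoder
            loss = loss_fn(logits, torch.zeros_like(logits))  # target: all-zero codeword
            opt.zero_grad()
            loss.backward()
            opt.step()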
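
The testing rule ("till at least 100 error samples detected at each SNR setting") corresponds to a standard Monte-Carlo stopping criterion. A minimal sketch, assuming BPSK over AWGN with the all-zero codeword, interpreting "error samples" as erroneously decoded frames (the paper does not define the term precisely), and using a hypothetical decoder interface that maps channel LLRs to hard bit decisions:

    import numpy as np

    def estimate_ber(decoder, n, snr_db, batch=384, min_error_frames=100):
        """Estimate BER at one SNR point, generating test batches until
        at least `min_error_frames` erroneous samples are observed."""
        # Noise std for unit-energy BPSK (code-rate scaling of Eb/N0 omitted).
        sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))
        bit_errors = total_bits = error_frames = 0
        while error_frames < min_error_frames:
            y = 1.0 + sigma * np.random.randn(batch, n)   # all-zero codeword -> all +1
            llr = 2.0 * y / sigma ** 2                    # channel log-likelihood ratios
            bits_hat = decoder(llr)                       # hard decisions in {0, 1}
            bit_errors += int((bits_hat != 0).sum())
            error_frames += int(np.any(bits_hat != 0, axis=1).sum())
            total_bits += batch * n
        return bit_errors / total_bits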