Turbo Autoencoder: Deep learning based channel codes for point-to-point communication channels

Authors: Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | (a) Turbo AE approaches state-of-the-art performance under canonical channels; (b) moreover, Turbo AE outperforms state-of-the-art codes under non-canonical settings in terms of reliability. Turbo AE shows that channel coding design can be automated via deep learning, with near-optimal performance.
Researcher Affiliation | Collaboration | Yihan Jiang, ECE Department, University of Washington, Seattle, United States (yij021@uw.edu); Hyeji Kim, Samsung AI Center Cambridge, Cambridge, United Kingdom (hkim1505@gmail.com); Himanshu Asnani, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India (himanshu.asnani@tifr.res.in); Sreeram Kannan, ECE Department, University of Washington, Seattle, United States (ksreeram@ee.washington.edu); Sewoong Oh, Allen School of Computer Science & Engineering, University of Washington, Seattle, United States (sewoong@cs.washington.edu); Pramod Viswanath, ECE Department, University of Illinois at Urbana-Champaign, Illinois, United States (pramodv@illinois.edu)
Pseudocode | Yes | Algorithm 1: Training Algorithm for Turbo AE (a hedged sketch of this alternating training schedule is given after the table).
Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described methodology.
Open Datasets | No | The paper describes generating data from channel models (AWGN, ATN, Markovian-AWGN) and refers to the Vienna 5G simulator [33][34] for benchmarks. It does not provide a specific public dataset with a URL, DOI, or a citation to an existing public data repository (a sketch of this on-the-fly noise generation is given after the table).
Dataset Splits | No | The paper does not provide specific percentages or sample counts for training, validation, or test splits, as data is generated from channel models rather than drawn from a fixed dataset with pre-defined splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory specifications).
Software Dependencies | No | The paper mentions the Vienna 5G simulator [33][34] and the Adam optimizer but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The hyper-parameters are shown in Table 1: Loss: binary cross-entropy (BCE); Encoder: 2-layer 1D-CNN, kernel size 5, 100 filters per f_{i,θ}(·) block; Decoder: 5-layer 1D-CNN, kernel size 5, 100 filters per g_{φ_{i,j}}(·) block; Decoder iterations: 6; Info feature size F: 5; Batch size: 500 at the start, doubled whenever training saturates for 20 epochs, up to 2000; Optimizer: Adam with initial learning rate 0.0001; Training schedule per epoch: train the encoder T_enc = 100 times, then the decoder T_dec = 500 times; Block length K: 100; Number of epochs M: 800. (A hedged sketch of this setup follows the table.)
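
The "dataset" in this paper is generated on the fly from the channel models (see the Open Datasets and Dataset Splits rows above), so a reproducer mainly needs the per-batch bit and noise generation. Below is a minimal NumPy sketch for the AWGN and ATN channels; the function names, the default SNR handling, and the t-distribution degrees of freedom are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def random_bits(batch_size=500, block_length=100):
    """Information bits u ~ Bernoulli(0.5); training data is generated on the fly."""
    return np.random.randint(0, 2, size=(batch_size, block_length)).astype(np.float32)

def noise_sigma(snr_db, code_rate=1.0 / 3.0):
    """Noise standard deviation for BPSK at a given Eb/N0 (dB) and code rate."""
    return np.sqrt(1.0 / (2.0 * code_rate * 10.0 ** (snr_db / 10.0)))

def awgn_noise(shape, snr_db):
    """Additive white Gaussian noise for the AWGN channel."""
    return noise_sigma(snr_db) * np.random.randn(*shape)

def atn_noise(shape, snr_db, nu=3.0):
    """Additive T-distributed (heavy-tailed) noise; nu=3 is an illustrative choice."""
    return noise_sigma(snr_db) * np.random.standard_t(nu, size=shape)
```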
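
Algorithm 1 (the alternating encoder/decoder training schedule) combined with the Table 1 hyper-parameters can be outlined roughly as follows. This is a heavily simplified PyTorch sketch, not the authors' implementation: the `cnn` stand-ins keep only the layer counts, kernel size, and filter width from Table 1, while the interleaver, the 6 iterative decoding passes, the info feature size F = 5, the power normalization of the codeword, and the batch-size doubling schedule are omitted or reduced to comments; the training SNR of 0 dB is likewise an illustrative choice.

```python
import torch
import torch.nn as nn

# Table 1 hyper-parameters as reported in the paper.
BLOCK_LENGTH  = 100        # K
NUM_EPOCHS    = 800        # M
T_ENC, T_DEC  = 100, 500   # encoder / decoder updates per epoch
LEARNING_RATE = 1e-4
CODE_RATE     = 1.0 / 3.0

def cnn(in_ch, out_ch, n_layers):
    """Stack of 1D conv layers with kernel size 5 and 100 filters (Table 1)."""
    layers, ch = [], in_ch
    for _ in range(n_layers - 1):
        layers += [nn.Conv1d(ch, 100, kernel_size=5, padding=2), nn.ELU()]
        ch = 100
    layers.append(nn.Conv1d(ch, out_ch, kernel_size=5, padding=2))
    return nn.Sequential(*layers)

# Simplified stand-ins for the Turbo AE encoder / decoder: the interleaver and
# the iterative decoding structure of the real model are not reproduced here.
encoder = cnn(1, 3, n_layers=2)   # bits (B, 1, K) -> rate-1/3 codeword (B, 3, K)
decoder = cnn(3, 1, n_layers=5)   # noisy codeword (B, 3, K) -> bit logits (B, 1, K)

enc_opt = torch.optim.Adam(encoder.parameters(), lr=LEARNING_RATE)
dec_opt = torch.optim.Adam(decoder.parameters(), lr=LEARNING_RATE)
bce = nn.BCEWithLogitsLoss()

def train_step(batch_size, snr_db, update_encoder):
    """One gradient step on either the encoder or the decoder (the other is frozen)."""
    u = torch.randint(0, 2, (batch_size, BLOCK_LENGTH)).float()
    x = encoder(u.unsqueeze(1))                       # encode
    sigma = (1.0 / (2.0 * CODE_RATE * 10 ** (snr_db / 10.0))) ** 0.5
    y = x + sigma * torch.randn_like(x)               # AWGN channel
    loss = bce(decoder(y).squeeze(1), u)              # binary cross-entropy
    enc_opt.zero_grad(); dec_opt.zero_grad()
    loss.backward()
    (enc_opt if update_encoder else dec_opt).step()
    return loss.item()

batch_size = 500   # Table 1: doubled whenever training saturates for 20 epochs, up to 2000
for epoch in range(NUM_EPOCHS):
    for _ in range(T_ENC):        # Algorithm 1: train the encoder with the decoder fixed
        train_step(batch_size, snr_db=0.0, update_encoder=True)
    for _ in range(T_DEC):        # then train the decoder with the encoder fixed
        train_step(batch_size, snr_db=0.0, update_encoder=False)
```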