Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning
Authors: Junghoon Kim, Taejoon Kim, David Love, Christopher Brinton
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes, and provide information-theoretic insights on the behavior of our non-linear codes. Moreover, we observe that, in a long blocklength regime, canonical error correction codes are still preferable to feedback codes when the feedback noise becomes high. Our code is available at https://anonymous.4open.science/r/RCode1. |
| Researcher Affiliation | Academia | 1Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA 2Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS, USA. |
| Pseudocode | Yes | Algorithm 1 Training for the proposed RNN autoencoder-based architecture |
| Open Source Code | Yes | Our code is available at https://anonymous.4open.science/r/RCode1. |
| Open Datasets | No | We consider a canonical point-to-point AWGN communication channel with noisy feedback as shown in Figure 1. We assume that the transmission occurs over N channel uses (timesteps). Let k ∈ {1, ..., N} denote the index of channel use and x[k] ∈ R represent the transmit signal at time k. At time k, the receiver receives the signal y[k] = x[k] + n1[k] ∈ R, k = 1, ..., N, (1) where n1[k] ~ N(0, σ1^2) is Gaussian noise for the forward channel. ... where n2[k] ~ N(0, σ2^2) is the feedback noise. The number of training data is J = 10^7, the batch size is Nbatch = 2.5 × 10^4, and the number of epochs is Nepoch = 100. |
| Dataset Splits | No | The paper mentions using a 'training data' size (J=10^7) and 'batch size' (Nbatch = 2.5 * 10^4) for training, but it does not specify any distinct validation dataset splits (e.g., percentages or counts for a validation set) or cross-validation setup for reproduction. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for experiments are mentioned in the paper. |
| Software Dependencies | No | Appendix B mentions using the Adam optimizer and GRUs, but does not specify version numbers for any software dependencies like deep learning frameworks (e.g., PyTorch, TensorFlow) or Python. |
| Experiment Setup | Yes | The number of training data is J = 10^7, the batch size is Nbatch = 2.5 × 10^4, and the number of epochs is Nepoch = 100. We use the Adam optimizer and a decaying learning rate, where the initial rate is 0.01 and the decaying ratio is γ = 0.95 applied for every epoch. We also use gradient clipping for training, where the gradients are clipped when the norm of gradients is larger than 1. We adopt two layers of uni-directional GRUs at the encoder and two layers of bi-directional GRUs at the decoder, with Nneurons = 50 neurons at each GRU. We initialize each neuron in GRUs with U(−1/√Nneurons, 1/√Nneurons), and all the power weights and attention weights to 1. We train our neural network model under particular forward/feedback noise powers and conduct inference in the same noise environment. |
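The channel model quoted in the Open Datasets row is simple to simulate. The sketch below is a minimal NumPy illustration of one transmission block; `sigma1` and `sigma2` are the forward and feedback noise standard deviations from the quoted definitions, while the passive feedback signal `y_fb[k] = y[k] + n2[k]` is an assumption inferred from those definitions, not a line reproduced from the paper.

```python
import numpy as np

def noisy_feedback_channel(x, sigma1, sigma2, rng=None):
    """Simulate one block of the AWGN channel with noisy feedback.

    x      : array of shape (N,), transmit signals x[1..N]
    sigma1 : forward-noise std, n1[k] ~ N(0, sigma1^2)
    sigma2 : feedback-noise std, n2[k] ~ N(0, sigma2^2)
    Returns (y, y_fb): receiver observations y[k] = x[k] + n1[k] (Eq. (1))
    and the assumed noisy feedback y_fb[k] = y[k] + n2[k] seen by the encoder.
    """
    rng = np.random.default_rng() if rng is None else rng
    n1 = rng.normal(0.0, sigma1, size=x.shape)  # forward channel noise
    n2 = rng.normal(0.0, sigma2, size=x.shape)  # feedback channel noise
    y = x + n1                                  # receiver input, Eq. (1)
    y_fb = y + n2                               # assumed noisy output feedback
    return y, y_fb
```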
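The Experiment Setup row fixes most of the optimization hyperparameters, which can be rendered as a short configuration sketch. The PyTorch code below is a hypothetical skeleton, not the authors' released model: the `FeedbackAutoencoderSketch` class, its input/output dimensions, and the `train_one_epoch` helper are placeholders, while the GRU depths and widths, the uniform initialization, Adam with initial rate 0.01 and per-epoch decay γ = 0.95, and gradient-norm clipping at 1 follow the quoted setup.

```python
import torch
import torch.nn as nn

N_NEURONS = 50  # GRU width reported in the paper

class FeedbackAutoencoderSketch(nn.Module):
    """Skeleton matching the reported GRU configuration (not the full model)."""
    def __init__(self, enc_in=2, dec_in=1, out_dim=1):
        super().__init__()
        # Encoder: two layers of uni-directional GRUs, 50 neurons each.
        self.encoder = nn.GRU(enc_in, N_NEURONS, num_layers=2, batch_first=True)
        # Decoder: two layers of bi-directional GRUs, 50 neurons each.
        self.decoder = nn.GRU(dec_in, N_NEURONS, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.readout = nn.Linear(2 * N_NEURONS, out_dim)
        self._init_weights()

    def _init_weights(self):
        # U(-1/sqrt(N_neurons), 1/sqrt(N_neurons)) initialization for GRU weights.
        bound = 1.0 / N_NEURONS ** 0.5
        for rnn in (self.encoder, self.decoder):
            for param in rnn.parameters():
                nn.init.uniform_(param, -bound, bound)

    def forward(self, enc_inputs, dec_inputs):
        x, _ = self.encoder(enc_inputs)   # channel inputs over the N timesteps
        h, _ = self.decoder(dec_inputs)   # decode from the received sequence
        return x, self.readout(h)

model = FeedbackAutoencoderSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# Decay the learning rate by gamma = 0.95 after every epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

def train_one_epoch(batches, loss_fn):
    for enc_in, dec_in, target in batches:
        optimizer.zero_grad()
        _, est = model(enc_in, dec_in)
        loss = loss_fn(est, target)
        loss.backward()
        # Clip gradients when their norm exceeds 1, as reported.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
    scheduler.step()
```

The reported batch size (2.5 × 10^4) and epoch count (100) would drive the outer training loop, which is omitted here along with the power-weight and attention-weight components mentioned in the quote.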