Hyper-Graph-Network Decoders for Block Codes
Authors: Eliya Nachmani, Lior Wolf
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results show that for a large number of algebraic block codes, from diverse families of codes (BCH, LDPC, Polar), the decoding obtained with our method outperforms the vanilla belief propagation method as well as other learning techniques from the literature. Applied to a wide variety of codes, our method outperforms the current learning-based solutions, as well as the classical BP method, both for a finite number of iterations and at convergence of the message passing iterations. In order to evaluate our method, we train the proposed architecture with three classes of linear block codes: Low Density Parity Check (LDPC) codes [6], Polar codes [1] and Bose Chaudhuri Hocquenghem (BCH) codes [3]. The results are reported as bit error rates (BER) for different SNR values (dB). Fig. 3 shows the results for sample codes, and Tab. 1 lists results for more codes. To evaluate the contribution of the various components of our method, we ran an ablation analysis. (A hedged BER-vs-SNR simulation sketch is given after this table.) |
| Researcher Affiliation | Collaboration | Eliya Nachmani and Lior Wolf, Facebook AI Research and Tel Aviv University |
| Pseudocode | No | The paper describes algorithms through mathematical equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | All generator matrices and parity check matrices are taken from [9]. [9] refers to: Michael Helmling, Stefan Scholl, Florian Gensheimer, Tobias Dietz, Kira Kraft, Stefan Ruzika, and Norbert Wehn. Database of Channel Codes and ML Simulation Results. www.uni-kl.de/channel-codes, 2019. |
| Dataset Splits | No | The paper mentions "For validation, we use the generator matrix G" but does not specify a distinct dataset split (percentages, counts, or explicit separate sets) for validation beyond how data is generated within batches. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of the "Adam optimizer [13]" but does not name the software framework used or provide version numbers for any libraries or dependencies. |
| Experiment Setup | Yes | The learning rate was 1e-4 for all types of codes, and the Adam optimizer [13] is used for training. The decoding network has ten layers, which simulate L = 5 iterations of a modified BP algorithm. In our experiments, the order of the Taylor series of arctanh is set to q = 1005. The network f has four layers with 32 neurons at each layer. The network g has two layers with 16 neurons at each layer. For BCH codes, we also tested a deeper configuration in which the network f has four layers with 128 neurons at each layer. (These widths and the learning rate are instantiated in the hypernetwork sketch after this table.) |
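
The "Research Type" row quotes the paper's evaluation protocol: bit error rates reported at several SNR points for BCH, LDPC, and Polar codes. As a minimal illustration of how one such BER-vs-SNR point is typically measured (this is not the authors' code), the sketch below simulates BPSK transmission over an AWGN channel and plugs in an arbitrary decoder. The all-zero-codeword convention, the (63, 45) code parameters, and the `ber_at_snr` / `decode` names are assumptions made for illustration only.

```python
# Hedged sketch: Monte Carlo BER estimate at one SNR point for BPSK over AWGN.
# `decode` is a placeholder for any decoder (e.g. belief propagation).
import numpy as np

def ber_at_snr(decode, n, snr_db, rate, num_words=1000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Noise std for BPSK at a given Eb/N0 (dB), accounting for the code rate.
    sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0)))
    errors = 0
    for _ in range(num_words):
        codeword = np.zeros(n, dtype=int)              # all-zero codeword convention
        received = 1.0 - 2.0 * codeword + sigma * rng.standard_normal(n)
        llr = 2.0 * received / sigma ** 2              # channel log-likelihood ratios
        decoded = decode(llr)                          # decoder returns bits in {0, 1}
        errors += int(np.count_nonzero(decoded != codeword))
    return errors / (num_words * n)

# Stand-in decoder: hard decision on the channel LLRs (no error correction).
hard_decision = lambda llr: (llr < 0).astype(int)
print(ber_at_snr(hard_decision, n=63, snr_db=4.0, rate=45 / 63))
```

Sweeping `snr_db` over a grid and plotting the returned BER values reproduces the kind of BER-vs-SNR curves referred to in the quoted text (Fig. 3 / Tab. 1 of the paper).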
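
The "Open Datasets" and "Dataset Splits" rows refer to generator matrices G and parity-check matrices H taken from the database in [9]. As a reminder of how the two relate, codewords are c = uG (mod 2) and every valid codeword satisfies Hc^T = 0 (mod 2). The toy sketch below uses a (7, 4) Hamming code chosen only for illustration; the matrices are not taken from that database.

```python
# Toy sketch: generator matrix G, parity-check matrix H, and the encode/check
# relationship c = uG (mod 2), H c^T = 0 (mod 2), using a (7,4) Hamming code.
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
G = np.hstack([np.eye(4, dtype=np.uint8), P])       # systematic form G = [I | P]
H = np.hstack([P.T, np.eye(3, dtype=np.uint8)])     # H = [P^T | I]

u = np.array([1, 0, 1, 1], dtype=np.uint8)          # message bits
c = (u @ G) % 2                                     # encode
assert np.all((H @ c) % 2 == 0)                     # all parity checks satisfied
print(c)
```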
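
The "Experiment Setup" row fixes the widths of the two networks (f: four layers of 32 units; g: two layers of 16 units), the Adam learning rate of 1e-4, and ten unrolled layers simulating L = 5 BP iterations. The sketch below instantiates those numbers in the hypernetwork pattern that the paper's title refers to, with g generating the weights of f from the absolute value of its input. The input size, activation, output width, and the final projection from g to f's parameter vector are illustrative assumptions, not the authors' released configuration.

```python
# Hedged sketch of the hypernetwork pattern with the quoted widths:
# f has four 32-unit layers; g has two 16-unit layers plus a projection to
# f's parameter vector (the projection is an assumption about how the sizes
# reconcile). Training uses Adam with lr = 1e-4, as stated in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNodeNet(nn.Module):
    def __init__(self, in_dim=16, f_width=32, f_depth=4, g_width=16):
        super().__init__()
        # Shapes of f's weight matrices and biases: four layers, 32 units each.
        dims = [in_dim] + [f_width] * f_depth
        self.f_shapes = [(dims[i + 1], dims[i]) for i in range(f_depth)]
        n_f_params = sum(o * i + o for o, i in self.f_shapes)
        # g: two 16-unit layers, then a projection emitting all of f's parameters.
        self.g = nn.Sequential(
            nn.Linear(in_dim, g_width), nn.Tanh(),
            nn.Linear(g_width, g_width), nn.Tanh(),
            nn.Linear(g_width, n_f_params),
        )

    def forward(self, x):
        theta = self.g(torch.abs(x))              # f's weights, generated from |x|
        y, idx = x, 0
        for out_d, in_d in self.f_shapes:         # run f with the generated weights
            w = theta[idx:idx + out_d * in_d].view(out_d, in_d)
            idx += out_d * in_d
            b = theta[idx:idx + out_d]
            idx += out_d
            y = torch.tanh(F.linear(y, w, b))
        return y

model = HyperNodeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate from the paper
num_unrolled_layers = 10                                    # ten layers ~ L = 5 BP iterations
print(model(torch.randn(16)).shape)                         # torch.Size([32])
```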