Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Relative Phase Equivariant Deep Neural Systems for Physical Layer Communications
Authors: Arwin Gansekoele, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei
TMLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our approach empirically and demonstrate that including relative phase equivariance achieves a better trade-off between the number of parameters and the performance of the model. For our experiments, we implemented the model in Figure 2. We train each model for 150 epochs with the AdamW (Kingma & Ba, 2015; Loshchilov & Hutter, 2017) optimizer using an initial learning rate of 1e-3. We use a stepwise scheduler that reduces the learning rate by a factor of 10 at epochs 100 and 125. We found that this reduction helps stabilize training and overall gives the models sufficient time to converge. All experiments were performed on an A100 40GB GPU and repeated 10 times. After training, we evaluate each model based on the bit error rate (BER). |
| Researcher Affiliation | Academia | Arwin Gansekoele (Stochastics Department, Centrum Wiskunde & Informatica, Amsterdam); Sandjai Bhulai (Department of Mathematics, Vrije Universiteit Amsterdam); Mark Hoogendoorn (Department of Computer Science, Vrije Universiteit Amsterdam); Rob van der Mei (Stochastics Department, Centrum Wiskunde & Informatica, Amsterdam) |
| Pseudocode | No | The paper describes methods and operations in text and through diagrams (Figure 1 and Figure 2), but does not include any structured pseudocode or algorithm blocks. The proof in the appendix is mathematical, not algorithmic. |
| Open Source Code | Yes | The code is available under: https://github.com/awgansekoele/relative-phase-equivariant-deep-neural-systems.git |
| Open Datasets | No | The paper uses channel models (Urban Macro (UMa), 3GPP-compliant TDL-A to C and CDL-A to E) to simulate data for its experiments. It does not use pre-existing, publicly available datasets in the traditional sense, but rather generates data on-the-fly based on these models. "Frequently, channel models are used to simulate fading coefficients h to train deep neural receivers. Many channel models exist, some realistic enough to validate practical systems directly (3GPP, 2006). Assume that we sample channel coefficients from a Rician fading model and sample a different initial phase θ ∼ U(−π, π) to ensure proper simulation." Therefore, no specific access information for a public dataset is provided. |
| Dataset Splits | No | The paper mentions training for 150 epochs and evaluating based on bit error rate (BER) by simulating until 500 batches of blocks were processed or 5,000 blocks with errors were detected. However, it does not specify explicit splits for a dataset into training, validation, or test sets, as data appears to be generated on demand during simulations rather than pre-split. |
| Hardware Specification | Yes | All experiments were performed on an A100 40GB GPU |
| Software Dependencies | No | "We used Sionna (Hoydis et al., 2022) to implement and evaluate our approach. Sionna provides TensorFlow implementations of physical layer components." The paper mentions Sionna and TensorFlow but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We train each model for 150 epochs with the AdamW (Kingma & Ba, 2015; Loshchilov & Hutter, 2017) optimizer using an initial learning rate of 1e-3. We use a stepwise scheduler that reduces the learning rate by a factor of 10 at epochs 100 and 125. |
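The stepwise schedule reported in the Experiment Setup row (initial learning rate 1e-3, divided by 10 at epochs 100 and 125) can be sketched as a small helper. This is a minimal illustration of the described schedule, not code from the paper's repository; the function and parameter names are assumptions.

```python
def learning_rate(epoch, initial_lr=1e-3, drop_epochs=(100, 125), factor=10.0):
    """Stepwise schedule: divide the learning rate by `factor`
    at each epoch listed in `drop_epochs` (illustrative sketch)."""
    lr = initial_lr
    for drop in drop_epochs:
        if epoch >= drop:
            lr /= factor
    return lr
```

In a TensorFlow/Sionna training loop, a function like this would typically be wired in via a callback or by assigning to the optimizer's learning rate each epoch.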
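The evaluation protocol in the Dataset Splits row (simulate until 500 batches of blocks are processed or 5,000 blocks with errors are detected, then report BER) amounts to a Monte-Carlo loop with two stopping criteria. The sketch below shows that loop under the assumption that `simulate_batch` returns per-batch counts of bit errors, transmitted bits, and blocks in error; all names here are illustrative, not from the paper.

```python
def run_ber_evaluation(simulate_batch, max_batches=500, target_error_blocks=5000):
    """Monte-Carlo BER estimate: stop after `max_batches` batches or once
    `target_error_blocks` blocks with errors have been observed (sketch)."""
    bit_errors = total_bits = error_blocks = batches = 0
    while batches < max_batches and error_blocks < target_error_blocks:
        errs, bits, blocks_in_error = simulate_batch()
        bit_errors += errs
        total_bits += bits
        error_blocks += blocks_in_error
        batches += 1
    return bit_errors / total_bits
```

The error-block cutoff bounds the variance of the BER estimate at low SNR, while the batch cap bounds runtime at high SNR where errors are rare.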