Neural Algorithmic Reasoning Without Intermediate Supervision

Authors: Gleb Rodionov, Liudmila Prokhorenkova

NeurIPS 2023

Reproducibility assessment (each entry lists the variable, the result, and the supporting LLM response):

Research Type: Experimental
  "The experiments show that our approach is competitive with the current state-of-the-art results relying on intermediate supervision. Moreover, for some of the problems, we achieve the best known performance: for instance, we get the F1 score 98.7% for the sorting, which significantly improves over the previously known winner with 95.2%."

Researcher Affiliation: Industry
  Gleb Rodionov, Yandex Research, Moscow, Russia (rodionovgleb@yandex-team.ru); Liudmila Prokhorenkova, Yandex Research, Amsterdam, The Netherlands (ostroumova-la@yandex-team.ru)

Pseudocode: No
  The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).

Open Source Code: No
  The paper does not provide concrete access to source code for the methodology described in this paper; there are no specific repository links, explicit code release statements, or code in supplementary materials.

Open Datasets: Yes
  "Our work follows the setup of the recently proposed CLRS Algorithmic Reasoning Benchmark (CLRS) (Veličković et al., 2022)."

Dataset Splits: Yes
  "Validation size 16"

Hardware Specification: Yes
  "Our models are trained on a single A100 GPU, requiring less than 1 hour to train."

Software Dependencies: No
  The paper mentions software components like "Adam optimizer" and "Triplet-GMPNN architecture" but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow, etc.).

Experiment Setup: Yes
  Optimiser: Adam; learning rate: 0.001; train steps: 10000; evaluate every 50 steps; early-stopping patience: 500 steps; batch size: 32; processor: Triplet-GMPNN; hidden state size: 128; message-passing steps per processor step: 1
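To make the F1 figure quoted in the Research Type entry concrete, here is a minimal sketch of the standard binary F1 computation (harmonic mean of precision and recall). This is illustrative only; the benchmark's own evaluation code may define and aggregate the metric differently.

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall over 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, with one true positive, one false positive, and one false negative, both precision and recall are 0.5, giving F1 = 0.5.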
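The training protocol in the Experiment Setup entry (10,000 steps, evaluation every 50 steps, early stopping with 500-step patience) can be sketched as a simple loop. This is a hypothetical reconstruction, not the authors' code: `train_step` and `validate` are placeholders for the real model, Adam optimizer, and data pipeline.

```python
def run_training(train_step, validate,
                 max_steps=10_000, eval_every=50, patience=500):
    """Train with periodic validation and early stopping.

    Hyperparameters default to the values reported in the table:
    10,000 max steps, evaluate every 50 steps, 500-step patience.
    """
    best_score = float("-inf")
    best_step = 0
    step = 0
    for step in range(1, max_steps + 1):
        train_step(step)           # one optimizer update on a batch (size 32 in the paper)
        if step % eval_every == 0:
            score = validate()     # e.g. score on the 16 validation samples
            if score > best_score:
                best_score, best_step = score, step
            elif step - best_step >= patience:
                break              # no improvement for `patience` steps: stop early
    return best_score, step
```

With a validation score that plateaus, the loop stops exactly `patience` steps after the last improvement rather than running all 10,000 steps.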