Logical characterizations of recurrent graph neural networks with reals and floats
Authors: Veeti Ahvonen, Damian Heiman, Antti Kuusisto, Carsten Lutz
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. This is a theoretical paper: every result stated in the abstract and introduction is justified via proofs, some given as proof sketches in the main text with full proofs in the appendix. |
| Researcher Affiliation | Academia | 1 Tampere University, 2 Leipzig University, 3 ScaDS.AI Dresden/Leipzig; 1 firstname.lastname@tuni.fi, 2,3 clu@informatik.uni-leipzig.de |
| Pseudocode | No | The paper describes theoretical concepts and proofs but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is theoretical and its methodology does not involve an implementation that could be released as open-source code. The NeurIPS checklist confirms: 'The paper does not include experiments requiring code.' |
| Open Datasets | No | The paper is purely theoretical and does not involve the use of any datasets for training or evaluation. |
| Dataset Splits | No | The paper is purely theoretical and does not involve experimental data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is purely theoretical and does not include experiments, thus no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and describes no implementation, so no software dependencies or version numbers are specified. |
| Experiment Setup | No | The paper is purely theoretical and does not include experimental setup details such as hyperparameters or training configurations. |