Some Might Say All You Need Is Sum

Authors: Eran Rosenbluth, Jan Tönshoff, Martin Grohe

IJCAI 2023

Reproducibility Variable Result LLM Response
Research Type | Experimental | "Lastly, in Section 7 we experiment with synthetic data and observe that what we proved to be expressible is, to an extent, also learnable, and that in practice inexpressivity is manifested in a significantly higher error than implied in theory. All proofs, some of the lemmas, and extended illustration and analysis of the experimentation are found in the full version."
Researcher Affiliation | Academia | "Eran Rosenbluth, Jan Toenshoff and Martin Grohe, RWTH Aachen University, {rosenbluth, toenshoff, grohe}@informatik.rwth-aachen.de"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "code for running the experiments is found at https://github.com/toenshoff/Uniform_Graph_Learning"
Open Datasets | No | "This dataset consists of the star graphs {G_{k,c}} from Section 5.1, for k, c ∈ [1..1000]... This dataset consists of the graphs {G_{k,c}} from Section 5.2, for k, c ∈ [1..1000]. As training data, we vary k ∈ [1..100] and c ∈ [1..100]. We therefore train on 10K graphs in each experiment."
Dataset Splits | No | The paper mentions training data and test data, but does not explicitly specify a separate validation split or its details.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies used in the experiments.
Experiment Setup | No | "Specific details concerning training and architecture, as well as additional illustrations and extended analysis, can be found in the full version." The main paper mentions a "GNN architecture consisting of two GNN layers" but lacks concrete hyperparameters or training configuration.
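The Open Datasets row above describes a synthetic training grid: star graphs G_{k,c} with k and c each varied over [1..100], giving 10K training graphs. The exact construction of G_{k,c} is not given in this excerpt, so the sketch below is purely illustrative: it assumes k is the number of leaves and c a scalar node feature, and only demonstrates how the 100 × 100 grid yields 10,000 graphs.

```python
# Hypothetical sketch of the synthetic training grid described in the
# Open Datasets row. The real definition of G_{k,c} is in the paper's
# Sections 5.1/5.2; the construction below (k = number of leaves,
# c = node feature value) is an assumption for illustration only.

def star_graph(k, c):
    """Build a star with one center (node 0) and k leaves; every node
    carries the scalar feature c. Returns (edge list, feature list)."""
    edges = [(0, leaf) for leaf in range(1, k + 1)]
    features = [c] * (k + 1)
    return edges, features

# The paper varies k in [1..100] and c in [1..100] for training,
# i.e. a 100 x 100 grid of graphs.
train_set = [star_graph(k, c) for k in range(1, 101) for c in range(1, 101)]
assert len(train_set) == 10_000  # matches the "10K graphs" in the row above
```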
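The Experiment Setup row notes only that a "GNN architecture consisting of two GNN layers" was used, with no hyperparameters. As a minimal sketch of what such an architecture looks like with the sum aggregation the paper's title refers to, the code below stacks two message-passing layers in which each node combines its own feature with the sum of its neighbours' features. The scalar weights, ReLU activation, and adjacency-dict representation are all assumptions for brevity, not the paper's configuration.

```python
# Minimal sketch of a two-layer GNN with sum aggregation. Layer widths,
# weights, and activations are illustrative assumptions; the paper's
# actual setup is only given in its full version.

def sum_gnn_layer(adj, feats, w_self, w_neigh):
    """One message-passing layer: each node v is updated to
    ReLU(w_self * x_v + w_neigh * sum of its neighbours' features)."""
    out = []
    for v, x in enumerate(feats):
        neigh_sum = sum(feats[u] for u in adj[v])
        out.append(max(w_self * x + w_neigh * neigh_sum, 0.0))  # ReLU
    return out

def two_layer_sum_gnn(adj, feats):
    """Stack two sum-aggregation layers, as in the architecture the
    Experiment Setup row alludes to (weights chosen arbitrarily here)."""
    h1 = sum_gnn_layer(adj, feats, w_self=1.0, w_neigh=0.5)
    return sum_gnn_layer(adj, h1, w_self=1.0, w_neigh=0.5)

# Example: path graph 0-1-2 with unit features.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(two_layer_sum_gnn(adj, [1.0, 1.0, 1.0]))  # -> [2.5, 3.5, 2.5]
```

With sum aggregation the centre node of the path accumulates more mass than the endpoints after two layers, which is exactly the kind of degree-sensitive behaviour that distinguishes sum from mean aggregation.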