On dimensionality of feature vectors in MPNNs

Authors: César Bravo, Alexander Kozachinskiy, Cristóbal Rojas

ICML 2024

Reproducibility assessment — each variable is listed with the assessed result and the supporting excerpt or LLM response:

Research Type: Experimental
"We experimentally validate this theoretical result by demonstrating that our 1par MPNN architecture is capable of achieving perfect simulation of the WL algorithm simultaneously for all graphs in the benchmark dataset for graph kernels (Kriege et al., 2020), even if we use the same value of the parameter γ in all the iterations. We also explored how the minimum number of precision bits required to guarantee perfect simulation depends on the size of the graphs."

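For context, the procedure the 1par MPNN is claimed to simulate is standard 1-dimensional Weisfeiler–Leman (WL) color refinement. A minimal reference implementation in Python (ours, not the paper's code) is sketched below; "perfect simulation" means the MPNN's scalar node features induce the same partition of nodes as these colors.

```python
# Reference 1-WL color refinement (standard algorithm, not the paper's code).
# Two nodes end up with the same color iff 1-WL cannot distinguish them.

def wl_refinement(adj, num_iters):
    """adj: dict mapping each node to a list of its neighbors."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(num_iters):
        # New color = (own color, multiset of neighbor colors).
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel signatures with small integers so colors stay compact.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Example: a 4-cycle; all nodes stay in a single color class under 1-WL.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wl_refinement(cycle, 3))  # {0: 0, 1: 0, 2: 0, 3: 0}
```
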
Researcher Affiliation: Academia
"(1) Instituto de Ingeniería Matemática y Computacional, Universidad Católica de Chile; (2) Centro Nacional de Inteligencia Artificial, Chile; (3) Instituto Milenio Fundamentos de los Datos, Chile."

Pseudocode: No
The paper provides mathematical equations for the architecture but does not include a formal pseudocode or algorithm block.

Open Source Code: Yes
"We have made available the code of all our experiments."

Open Datasets: Yes
"We tested our one-dimensional architecture 1par MPNN on the benchmark dataset for graph kernels introduced in (Kersting et al., 2016; Kriege et al., 2020), which consists of a collection of 26 different datasets with graphs from different domains including molecular biology, social networks, and computer vision."

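The collection referenced here is the TUDataset benchmark. The paper does not name its data-loading tooling (see the Software Dependencies row below), so the following is only one illustrative way to fetch these datasets, via PyTorch Geometric's TUDataset class; the library choice, the cache directory, and the pick of MUTAG as an example are our assumptions.

```python
# Hypothetical loading sketch: the paper does not state which library it used.
# The TUDataset collection (Kersting et al., 2016; Kriege et al., 2020) is
# available, for example, through PyTorch Geometric.
from torch_geometric.datasets import TUDataset

# 'MUTAG' is one dataset in the collection; root is an arbitrary cache directory.
dataset = TUDataset(root="data/TUDataset", name="MUTAG")
print(len(dataset), dataset.num_node_features)
```
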
Dataset Splits: No
The paper mentions using benchmark datasets but does not specify training, validation, or test splits for its experiments, nor does it cite a standard split.

Hardware Specification: No
The paper does not mention any specific hardware used for running the experiments.

Software Dependencies: No
The paper mentions using a "sigmoid activation function" but does not specify any software names with version numbers, such as programming languages or libraries.

Experiment Setup: Yes
"In our experiments, we implemented our architecture 1par MPNN M_γ with sigmoid activation function and with γ^(0) = γ^(1) = γ^(2) = … = γ for a randomly chosen γ, so that the same value of the parameter was used for all the iterations... All the computations were performed with 50 bits of precision."
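The exact message-passing update is not quoted in this row, so the sketch below only mirrors the stated setup: sigmoid activation, one shared randomly chosen γ, and 50 bits of working precision (via mpmath here, which the paper does not name). The aggregation form x_v ← σ(γ·(x_v + Σ_{u∈N(v)} x_u)), the uniform initialization, and the sampling range for γ are our assumptions, not the paper's stated formula, and would need to be checked against the paper's equations for M_γ.

```python
# Sketch of the described setup; the update rule itself is an assumption.
import random
from mpmath import mp, mpf, exp

mp.prec = 50  # 50 bits of working precision, as stated in the paper

def sigmoid(x):
    return 1 / (1 + exp(-x))

def one_param_mpnn(adj, gamma, num_iters):
    """adj: dict node -> neighbor list. Assumed update:
    x_v <- sigmoid(gamma * (x_v + sum of neighbor features))."""
    x = {v: mpf(1) for v in adj}  # uniform scalar initialization (assumption)
    for _ in range(num_iters):
        x = {
            v: sigmoid(gamma * (x[v] + sum(x[u] for u in adj[v])))
            for v in adj
        }
    return x

gamma = mpf(random.random())  # one random gamma, shared across all iterations
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(one_param_mpnn(cycle, gamma, 3))
```

Under perfect simulation, the partition of nodes by equal feature values after t iterations should coincide with the partition produced by t rounds of the WL refinement shown earlier.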