Universal Invariant and Equivariant Graph Neural Networks

Authors: Nicolas Keriven, Gabriel Peyré

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section provides numerical illustrations of our findings on simple synthetic examples. The goal is to examine the impact of the tensorization orders k_s and the width S. The code is available at https://github.com/nkeriven/univgnn. ... We consider graphs, represented using their adjacency matrices (i.e., 2-way tensors, so that d = 2). The synthetic graphs are drawn uniformly among 5 graph topologies (complete graph, star, cycle, path or wheel) with edge weights drawn independently as the absolute value of a centered Gaussian variable. Since our approximation results are valid for several graph sizes simultaneously, both training and testing datasets contain 1.4 × 10^4 graphs, half with 5 nodes and half with 10 nodes. The training is performed by minimizing a square Euclidean loss (MSE) on the training dataset. The minimization is performed by stochastic gradient descent using the ADAM optimizer (Kingma and Ba, 2014). We consider two different regression tasks: (i) in the invariant case, the scalar to predict is the geodesic diameter of the graph; (ii) in the equivariant case, the vector to predict assigns to each node the length of the longest shortest path emanating from it. ... Figure 3 shows that, in these two cases, when increasing the width S, the out-of-sample prediction error quickly stagnates (and sometimes increasing S too much can slightly degrade performance by making the training harder). In sharp contrast, increasing the tensorization order k has a significant impact and lowers this optimal error value. (A data-generation sketch follows the table.)
Researcher Affiliation | Academia | Nicolas Keriven, École Normale Supérieure, Paris, France (nicolas.keriven@ens.fr); Gabriel Peyré, CNRS and École Normale Supérieure, Paris, France (gabriel.peyre@ens.fr)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/nkeriven/univgnn.
Open Datasets | No | The synthetic graphs are drawn uniformly among 5 graph topologies (complete graph, star, cycle, path or wheel) with edge weights drawn independently as the absolute value of a centered Gaussian variable. Since our approximation results are valid for several graph sizes simultaneously, both training and testing datasets contain 1.4 × 10^4 graphs, half with 5 nodes and half with 10 nodes.
Dataset Splits | No | Since our approximation results are valid for several graph sizes simultaneously, both training and testing datasets contain 1.4 × 10^4 graphs, half with 5 nodes and half with 10 nodes.
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper mentions using the ADAM optimizer but does not specify software versions for programming languages, libraries, or frameworks used in the implementation.
Experiment Setup | Yes | The training is performed by minimizing a square Euclidean loss (MSE) on the training dataset. The minimization is performed by stochastic gradient descent using the ADAM optimizer (Kingma and Ba, 2014). ... The GNNs (1) are implemented with a fixed tensorization order k_s = k ∈ {1, 2, 3} and ρ = ρ_sig. Figure 3 shows MSE results after 150 epochs. (A training-loop sketch follows the table.)
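The setup quoted in the Research Type and Open Datasets rows fully specifies the synthetic data, so it can be regenerated without the released code. Below is a minimal sketch, assuming NetworkX generators for the five topologies and weighted shortest paths for both regression targets; the names `sample_graph`, `targets`, and `make_dataset` are illustrative and are not taken from https://github.com/nkeriven/univgnn.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Five graph topologies, each wrapped so its argument is the node count n
# (nx.star_graph(k) has k + 1 nodes, hence the off-by-one correction).
TOPOLOGIES = [
    lambda n: nx.complete_graph(n),
    lambda n: nx.star_graph(n - 1),
    lambda n: nx.cycle_graph(n),
    lambda n: nx.path_graph(n),
    lambda n: nx.wheel_graph(n),
]

def sample_graph(n):
    """Uniform topology with i.i.d. |N(0, 1)| edge weights."""
    G = TOPOLOGIES[rng.integers(len(TOPOLOGIES))](n)
    for u, v in G.edges():
        G[u][v]["weight"] = abs(rng.normal())
    return G

def targets(G):
    """Weighted shortest-path lengths give both regression targets."""
    lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    # Per-node length of the longest shortest path (equivariant target) ...
    ecc = np.array([max(lengths[v].values()) for v in G.nodes()])
    # ... and its maximum, the geodesic diameter (invariant target).
    return ecc.max(), ecc

def make_dataset(n_graphs=14_000):
    """Half of the graphs have 5 nodes, half have 10, as in the paper."""
    data = []
    for i in range(n_graphs):
        G = sample_graph(5 if i < n_graphs // 2 else 10)
        diam, ecc = targets(G)
        data.append((nx.to_numpy_array(G, weight="weight"), diam, ecc))
    return data

train_set = make_dataset()
test_set = make_dataset()
```

Because the edge weights are positive almost surely, Dijkstra shortest paths are well defined, and the per-node eccentricities directly give the equivariant target and, via their maximum, the invariant one.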
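For the Experiment Setup row, the sketch below shows a minimal PyTorch training loop matching the quoted hyperparameters (MSE loss, Adam, 150 epochs), run on the `train_set` produced above. The model is a trivial permutation-invariant placeholder (sum-pooled weighted degrees fed to a small MLP with a sigmoid nonlinearity), not the tensorized GNN of Eq. (1) in the paper; the learning rate and hidden width are assumptions.

```python
import torch
import torch.nn as nn

class InvariantBaseline(nn.Module):
    """Sum-pooled weighted degrees -> MLP; a simple permutation-invariant stand-in."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.Sigmoid(), nn.Linear(hidden, 1))

    def forward(self, adj):                      # adj: (n, n) weighted adjacency
        deg = adj.sum(dim=1, keepdim=True)       # (n, 1) weighted degrees
        return self.mlp(deg).sum(dim=0)          # pool over nodes -> shape (1,)

model = InvariantBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate is an assumption
loss_fn = nn.MSELoss()

# `train_set` holds (adjacency, diameter, eccentricities) triples as produced by the
# data-generation sketch above; only the invariant task (diameter) is shown here.
for epoch in range(150):                         # 150 epochs, as reported
    for adj, diam, _ in train_set:
        adj = torch.as_tensor(adj, dtype=torch.float32)
        target = torch.tensor([diam], dtype=torch.float32)
        optimizer.zero_grad()
        loss = loss_fn(model(adj), target)
        loss.backward()
        optimizer.step()
```

Swapping the placeholder module for the authors' tensorized invariant or equivariant network, and the diameter target for the per-node eccentricity vector, recovers the two regression tasks described in the table.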