Expressive Power of Invariant and Equivariant Graph Neural Networks
Authors: Waïss Azizian, Marc Lelarge
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our results on the Quadratic Assignment Problem (an NP-hard combinatorial problem) by showing that FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms (based on spectral, SDP or other GNN architectures). On a practical side, we also implement masked tensors to handle batches of graphs of varying sizes. ... A PyTorch implementation of the code necessary to reproduce the results is available at https://github.com/mlelarge/graph_neural_net (a minimal padding-and-mask sketch appears after this table). |
| Researcher Affiliation | Academia | Waïss Azizian, ENS, PSL University, Paris, France (waiss.azizian@ens.fr); Marc Lelarge, INRIA & ENS, PSL University, Paris, France (marc.lelarge@ens.fr) |
| Pseudocode | No | The paper describes algorithms and methods in prose but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | A PyTorch implementation of the code necessary to reproduce the results is available at https://github.com/mlelarge/graph_neural_net |
| Open Datasets | No | The paper refers to the 'Erdős–Rényi model' and 'random regular graphs', and cites 'Feizi et al. (2016)' for an error model, but it does not provide links, DOIs, repositories, or formal citations for a publicly available dataset used in its experiments. Instead, it describes how the data were generated: 'the dataset was made of 20000 graphs for the train set, 1000 for the validation set and 1000 for the test set. For the experiment with Erdős–Rényi random graphs, we consider G1 to be a random Erdős–Rényi graph...' (a generation sketch for these splits appears after this table). |
| Dataset Splits | Yes | For each experiment, the dataset was made of 20000 graphs for the train set, 1000 for the validation set and 1000 for the test set. |
| Hardware Specification | No | The paper mentions 'highly parallelizable on GPUs' and acknowledges 'Google Cloud Platform research credits and NVIDIA for a NVIDIA GPU Grant'. However, it does not provide specific GPU models (e.g., NVIDIA A100), CPU models, or detailed specifications of the hardware used for experiments. |
| Software Dependencies | Yes | A PyTorch implementation of the code necessary to reproduce the results is available at https://github.com/mlelarge/graph_neural_net ... Thanks to the newest improvements of PyTorch (Paszke et al., 2019), Masked Tensors act as a subclass of the fundamental Tensor class. ... We trained for 25 epochs with batches of size 32, a learning rate of 1e-4 and the Adam optimizer. |
| Experiment Setup | Yes | We used 2-FGNN_E with 2 layers, each MLP having depth 3 and hidden states of size 64. We trained for 25 epochs with batches of size 32, a learning rate of 1e-4 and the Adam optimizer. (See the training-configuration sketch after this table.) |
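The "masked tensors" mentioned in the Research Type and Software Dependencies rows are worth unpacking. The sketch below is a minimal illustration of the padding-and-mask idea for batching dense adjacency matrices of different sizes; the function name `batch_graphs` and the tensor layout are assumptions made for illustration, not the authors' actual MaskedTensor implementation, which lives in the linked repository.

```python
import torch

def batch_graphs(adjs):
    """Pad dense adjacency matrices to a common size and return the
    padded batch plus a boolean node mask (True = real node).

    Minimal sketch only; the authors' MaskedTensor subclass in
    https://github.com/mlelarge/graph_neural_net is more general.
    """
    n_max = max(a.shape[0] for a in adjs)
    batch = torch.zeros(len(adjs), n_max, n_max)
    mask = torch.zeros(len(adjs), n_max, dtype=torch.bool)
    for i, a in enumerate(adjs):
        n = a.shape[0]
        batch[i, :n, :n] = a   # copy the real adjacency into the top-left block
        mask[i, :n] = True     # mark which rows correspond to real nodes
    return batch, mask

# Example: a 3-node and a 5-node graph batched together.
batch, mask = batch_graphs([torch.ones(3, 3), torch.ones(5, 5)])
assert batch.shape == (2, 5, 5) and mask.sum() == 8
```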
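Similarly, the quoted 20000/1000/1000 split can be reproduced with any Erdős–Rényi sampler. Below is a hedged sketch: the graph size `n=50` and edge probability `p=0.2` are placeholder values chosen for illustration, since the quoted text does not fix them, and `erdos_renyi` is a hypothetical helper rather than the authors' data pipeline.

```python
import torch

def erdos_renyi(n, p):
    """Sample a symmetric {0,1} adjacency matrix from G(n, p)."""
    upper = (torch.rand(n, n) < p).triu(diagonal=1)  # i.i.d. coin flips above the diagonal
    return (upper | upper.T).float()                 # mirror to get an undirected graph

# Split sizes quoted from the paper; n and p are illustrative placeholders.
splits = {"train": 20000, "val": 1000, "test": 1000}
datasets = {name: [erdos_renyi(n=50, p=0.2) for _ in range(size)]
            for name, size in splits.items()}
```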
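Finally, the quoted hyperparameters translate into a straightforward training configuration. In the sketch below, `mlp` is a stand-in matching the quoted depth-3, width-64 MLPs; the real model is the 2-layer 2-FGNN_E from the authors' repository, and the data loader and loss are left as placeholders.

```python
import torch
from torch import nn

def mlp(d_in, d_hidden=64, d_out=64, depth=3):
    """Depth-3, width-64 MLP matching the quoted setting."""
    layers, d = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(d, d_hidden), nn.ReLU()]
        d = d_hidden
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

model = mlp(d_in=64)  # placeholder for the 2-layer 2-FGNN_E
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # quoted learning rate
for epoch in range(25):        # quoted: 25 epochs
    for batch in []:           # placeholder loader; the paper uses batches of size 32
        optimizer.zero_grad()
        loss = model(batch).sum()  # placeholder loss
        loss.backward()
        optimizer.step()
```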