On Graph Neural Networks versus Graph-Augmented MLPs

Authors: Lei Chen, Zhengdao Chen, Joan Bruna

ICLR 2021

Reproducibility assessment. Each entry gives the variable, the assessed result, and the LLM response supporting that result.

Research Type: Experimental
LLM Response: "From the results in Table 1, we see that indeed, the number of equivalence classes induced by GA-MLP-A is smaller than that by GNNs, with the highest relative difference occurring at K = 2. From Table 2, we see that GIN significantly outperforms GA-MLPs in both training and testing on both graphs, consistent with the theoretical result in Proposition 6 that GNNs can count attributed walks while GA-MLPs cannot." (A minimal sketch contrasting the two architectures appears after this list.)

Researcher Affiliation: Academia
LLM Response: Lei Chen and Zhengdao Chen (Courant Institute of Mathematical Sciences, New York University, New York, NY; {lc3909, zc1216}@nyu.edu); Joan Bruna (Courant Institute of Mathematical Sciences and Center for Data Science, New York University, New York, NY; bruna@cims.nyu.edu).

Pseudocode: No
LLM Response: The paper describes methods and theoretical arguments but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.

Open Source Code: Yes
LLM Response: Code is available at https://github.com/leichen2018/GNN_vs_GAMLP.

Open Datasets: Yes
LLM Response: "We take graphs from the Cora dataset (with node features removed) as well as generate a random regular graph (RRG) with 1000 nodes and the node degree being 6." The paper also uses the standard Cora, Citeseer, and Pubmed citation benchmarks. (A sketch of the RRG construction appears after this list.)

Dataset Splits: No
LLM Response: "The number of nodes for training and testing is split as 1000/1708 for the Cora graph and 300/700 for the random regular graph." Only training and testing splits are mentioned; no explicit validation split is reported. (A sketch of such a node-level split appears after this list.)

Hardware Specification: No
LLM Response: The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory, or cloud resources) used to run the experiments.

Software Dependencies: No
LLM Response: "The model is trained with the Adam optimizer (Kingma & Ba, 2014) with learning rate selected from {0.1, 0.02, 0.01, 0.005, 0.001}." "We use instance normalization (Ulyanov et al., 2016)." These passages mention software tools but give none of the version numbers needed for reproducibility.

Experiment Setup: Yes
LLM Response: "The number of hidden dimensions is searched in {8, 16, 32, 64, 256}. The model is trained with the Adam optimizer (Kingma & Ba, 2014) with learning rate selected from {0.1, 0.02, 0.01, 0.005, 0.001}." For a second experiment: "The highest order of operators is searched in {30, 60, 120}. The number of hidden dimensions is searched in {10, 20}." (A runnable sketch of the first hyperparameter grid appears after this list.)
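
To make the model comparison in the Research Type entry concrete, here is a minimal NumPy sketch contrasting the two families. The operator family {I, A, ..., A^K} follows the paper's GA-MLP-A; gin_style_layer is a generic GIN-style update written for illustration, not the authors' exact implementation.

```python
import numpy as np

def ga_mlp_features(A, X, K):
    """GA-MLP-A augmentation: stack [X, AX, ..., A^K X] column-wise.

    A node-wise MLP is then applied to this fixed, precomputed representation,
    so nothing is learned inside the graph propagation itself.
    """
    feats = [X]
    for _ in range(K):
        feats.append(A @ feats[-1])
    return np.concatenate(feats, axis=1)  # shape (n_nodes, (K + 1) * d)

def gin_style_layer(A, H, W):
    """One message-passing layer: ReLU((A + I) H W).

    Interleaving neighbor aggregation with a learned nonlinear update is what
    lets GNNs count attributed walks (Proposition 6) while GA-MLPs cannot.
    """
    return np.maximum((A + np.eye(A.shape[0])) @ H @ W, 0.0)
```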
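
The random regular graph named in the Open Datasets entry is straightforward to regenerate with networkx; the seed below is an assumption, since the paper does not report one.

```python
import networkx as nx

# 1000-node random regular graph with every node of degree 6, as in the paper.
# seed=0 is an assumption; the paper does not report a seed.
G = nx.random_regular_graph(d=6, n=1000, seed=0)
A = nx.to_numpy_array(G)  # dense adjacency matrix, shape (1000, 1000)
```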
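
For the Dataset Splits entry, a plain random node split reproducing the reported sizes might look like the sketch below; that the split is uniform at random, and the seed, are assumptions.

```python
import numpy as np

def split_nodes(n_nodes, n_train, seed=0):
    # Node-level train/test split with no held-out validation set, matching
    # the reported sizes (1000/1708 for Cora, 300/700 for the RRG).
    # Uniform random sampling and the seed are assumptions.
    perm = np.random.default_rng(seed).permutation(n_nodes)
    return perm[:n_train], perm[n_train:]

train_idx, test_idx = split_nodes(2708, 1000)  # Cora: 1000 train / 1708 test
```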
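
The grids in the Experiment Setup entry translate directly into a small search loop. Only the grids and the use of Adam come from the paper; the tiny classifier and synthetic data are placeholders so the sketch runs end to end.

```python
import torch
from itertools import product

# Search grids quoted in the Experiment Setup entry.
HIDDEN_DIMS = [8, 16, 32, 64, 256]
LEARNING_RATES = [0.1, 0.02, 0.01, 0.005, 0.001]

X = torch.randn(100, 16)         # placeholder node features
y = torch.randint(0, 7, (100,))  # placeholder labels (7 classes, as in Cora)

for hidden, lr in product(HIDDEN_DIMS, LEARNING_RATES):
    model = torch.nn.Sequential(
        torch.nn.Linear(16, hidden),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden, 7),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, as in the paper
    for _ in range(10):  # a few optimization steps, purely illustrative
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
```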