Convergence of Invariant Graph Networks

Authors: Chen Cai, Yusu Wang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
Evidence: "Lastly, we perform experiments on various graphon models to verify our statements. We experiment with 2-IGN on graphon models of increasing complexity: an Erdős–Rényi graph with p = 0.1; a stochastic block model with 2 blocks of equal size and probability matrix [[0.1, 0.25], [0.25, 0.4]]; a Lipschitz graphon model with W(u, v) = (u + v + 1)/4; and a piecewise Lipschitz graphon with W(u, v) = ((u mod 1/3) + 1)/4, where mod is the modulo operation. Similar to (Keriven et al., 2020), we consider an untrained IGN with random weights to assess how convergence depends on the choice of architecture rather than learning. We use a 5-layer IGN with hidden dimension 16. We take graphs of different sizes as input and plot the error in terms of the norm of the output difference. The results are plotted in Figure 3."
Researcher Affiliation: Academia
Evidence: "Chen Cai 1, Yusu Wang 1. 1 University of California San Diego, San Diego, USA. Correspondence to: Chen Cai <c1cai@ucsd.edu>."
Pseudocode: No
Evidence: The paper contains mathematical formulations and figures but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No
Evidence: "Chen Cai would like to thank ... Hy Truong Son for providing the pytorch implementation of IGN."
Open Datasets: No
Evidence: "We experiment with 2-IGN on graphon models of increasing complexity: an Erdős–Rényi graph with p = 0.1; a stochastic block model with 2 blocks of equal size and probability matrix [[0.1, 0.25], [0.25, 0.4]]; a Lipschitz graphon model with W(u, v) = (u + v + 1)/4; and a piecewise Lipschitz graphon with W(u, v) = ((u mod 1/3) + 1)/4, where mod is the modulo operation."
Dataset Splits: No
Evidence: "We take graphs of different sizes as input and plot the error in terms of the norm of the output difference."
Hardware Specification: No
Evidence: The paper does not provide any specific hardware details used for running its experiments.
Software Dependencies: No
Evidence: "Chen Cai would like to thank ... Hy Truong Son for providing the pytorch implementation of IGN."
Experiment Setup: Yes
Evidence: "We use a 5-layer IGN with hidden dimension 16. We take graphs of different sizes as input and plot the error in terms of the norm of the output difference. Similar to (Keriven et al., 2020), we consider an untrained IGN with random weights to assess how convergence depends on the choice of architecture rather than learning."
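The graphon models quoted above are easy to sample from. Below is a minimal NumPy sketch, not the authors' code: the function names, the latent-variable sampling scheme, and the threshold-at-0.5 block assignment for the SBM are our own illustrative assumptions.

```python
import numpy as np

def sample_graphon(W, n, rng):
    """Sample an n-node simple graph from graphon W: draw latents
    u_i ~ Uniform[0, 1] and connect i, j with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])                        # n x n edge probabilities
    upper = np.triu(rng.uniform(size=(n, n)) < P, k=1)   # Bernoulli draws, no self-loops
    return (upper | upper.T).astype(float)               # symmetrize

# The graphon models described in the quoted passage:
erdos_renyi = lambda u, v: np.full_like(u + v, 0.1)   # constant graphon, p = 0.1
lipschitz   = lambda u, v: (u + v + 1) / 4            # Lipschitz graphon
piecewise   = lambda u, v: ((u % (1 / 3)) + 1) / 4    # piecewise Lipschitz graphon

def sbm(u, v):
    # 2-block stochastic block model, equal-size blocks,
    # probability matrix B = [[0.1, 0.25], [0.25, 0.4]]
    B = np.array([[0.1, 0.25], [0.25, 0.4]])
    return B[(u >= 0.5).astype(int), (v >= 0.5).astype(int)]
```

Repeating the quoted setup would then mean sampling graphs of increasing size n from one fixed graphon and comparing the outputs of a fixed random-weight IGN across sizes.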
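For intuition about what an "untrained IGN with random weights" computes, here is a hedged sketch of a 2-IGN-style layer stack. A full 2-IGN linear layer mixes 15 permutation-equivariant basis operations (Maron et al.); this sketch uses only 5 of them, and all names, widths, and the weight scale are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def equivariant_layer(X, w, b):
    """Simplified 2-IGN layer on an (n, n, d) tensor X.
    w has shape (5, d, d_out), one mixing matrix per basis op; b has shape (d_out,)."""
    ops = [
        X,                                                        # identity
        X.transpose(1, 0, 2),                                     # transpose
        np.broadcast_to(X.mean(0, keepdims=True), X.shape),       # column means
        np.broadcast_to(X.mean(1, keepdims=True), X.shape),       # row means
        np.broadcast_to(X.mean((0, 1), keepdims=True), X.shape),  # global mean
    ]
    pre = sum(op @ w[k] for k, op in enumerate(ops)) + b
    return np.maximum(pre, 0)  # ReLU

def untrained_ign(A, dims, rng):
    """Stack random-weight equivariant layers on adjacency matrix A,
    e.g. dims = [1, 16, 16, 16, 16, 1] for a 5-layer IGN of width 16."""
    X = A[:, :, None]
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        w = rng.normal(size=(5, d_in, d_out)) / np.sqrt(5 * d_in)
        b = rng.normal(size=d_out) * 0.01
        X = equivariant_layer(X, w, b)
    return X
```

Permutation equivariance of the stack can be sanity-checked by verifying that `untrained_ign(A[perm][:, perm], ...)` (with the same weights) matches `untrained_ign(A, ...)[perm][:, perm]` for a random permutation `perm`.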