Sign and Basis Invariant Networks for Spectral Graph Representation Learning

Authors: Derek Lim, Joshua David Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet.
Researcher Affiliation | Collaboration | Derek Lim, Joshua Robinson (MIT CSAIL, {dereklim, joshrob}@mit.edu); Lingxiao Zhao (Carnegie Mellon University); Tess Smidt (MIT EECS & MIT RLE); Suvrit Sra (MIT LIDS); Haggai Maron (NVIDIA Research); Stefanie Jegelka (MIT CSAIL)
Pseudocode | Yes | Figure 5: PyTorch-like pseudo-code for using SignNet with a GNN prediction model, where ϕ = GIN and ρ = MLP as in the ZINC molecular graph regression experiments.
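For reference, the sign-invariant construction the paper's pseudocode implements is f(v_1, ..., v_k) = ρ([ϕ(v_i) + ϕ(−v_i)]_{i=1..k}), applied to the first k Laplacian eigenvectors. Below is a minimal self-contained PyTorch sketch of that construction; here ϕ and ρ are plain MLPs and all dimensions are illustrative, whereas the paper's ZINC experiments use ϕ = GIN operating over the graph.

```python
import torch
import torch.nn as nn

class SignNetSketch(nn.Module):
    """Sketch of a sign-invariant eigenvector encoder:
    f(v_1, ..., v_k) = rho([phi(v_i) + phi(-v_i)]_{i=1..k}).
    phi and rho are plain MLPs here; the paper's ZINC setup
    uses phi = GIN (message passing over the graph) and rho = MLP."""

    def __init__(self, k: int, hidden: int = 64, out_dim: int = 32):
        super().__init__()
        # phi maps each eigenvector entry (one scalar per node) to a feature
        self.phi = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # rho mixes the k sign-invariant channels into the final encoding
        self.rho = nn.Sequential(
            nn.Linear(k * hidden, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, eigvecs: torch.Tensor) -> torch.Tensor:
        # eigvecs: (num_nodes, k) -- the first k Laplacian eigenvectors
        v = eigvecs.unsqueeze(-1)          # (num_nodes, k, 1)
        h = self.phi(v) + self.phi(-v)     # invariant to flipping any v_i's sign
        return self.rho(h.flatten(1))      # (num_nodes, out_dim)


# usage: encode 8 eigenvectors of a 10-node graph into positional features
pe = SignNetSketch(k=8)(torch.randn(10, 8))  # -> shape (10, 32)
```

Because ϕ(v_i) + ϕ(−v_i) is unchanged when v_i is negated, the output is identical for either choice of eigenvector sign, which is the invariance the paper targets.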
Open Source Code | Yes | Our code is available at https://github.com/cptq/SignNet-BasisNet.
Open Datasets | Yes | The data we use are all freely available online. The datasets we use are ZINC (Irwin et al., 2012), Alchemy (Chen et al., 2019a), the synthetic counting substructures dataset (Chen et al., 2020), the multi-task graph property regression synthetic dataset (Corso et al., 2020) (MIT License), the images dataset used by Balcilar et al. (2020) (GNU General Public License v3.0), the cat mesh from free3d.com/3d-model/cat-v1--522281.html (Personal Use License), and the human mesh from turbosquid.com/3d-models/water-park-slides-3d-max/1093267 (TurboSquid 3D Model License).
Dataset Splits | Yes | Alchemy. We run our method and compare with the state-of-the-art on Alchemy (with 10,000 training graphs). We use the same data split as Morris et al. (2020b).
Hardware Specification | Yes | Most experiments were run on a server with 8 NVIDIA RTX 2080 Ti GPUs.
Software Dependencies | No | We run all of our experiments in Python, using the PyTorch (Paszke et al., 2019) framework (license URL). We also make use of Deep Graph Library (DGL) (Wang et al., 2019) (Apache License 2.0) and PyTorch Geometric (PyG) (Fey & Lenssen, 2019) (MIT License) for experiments with graph data.
Experiment Setup | Yes | We train with an Adam optimizer (Kingma & Ba, 2014) with a starting learning rate of 0.001 and a minimum learning rate of 0.000001. The learning rate schedule cuts the learning rate in half with a patience of 20 epochs, and training ends when we reach the minimum learning rate.
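A minimal PyTorch sketch of this training configuration follows. The excerpt does not name a scheduler class, so ReduceLROnPlateau is an assumed implementation of the described halving schedule, and the model and validation loss below are placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: a toy model and a placeholder validation loss.
model = nn.Linear(16, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Cuts the learning rate in half with a patience of 20 epochs":
# ReduceLROnPlateau with factor=0.5, patience=20, floored at 1e-6.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=20, min_lr=1e-6
)

for epoch in range(1000):
    val_loss = torch.rand(()).item()  # replace with the real validation loss
    scheduler.step(val_loss)
    # training ends once the minimum learning rate is reached
    if optimizer.param_groups[0]["lr"] <= 1e-6:
        break
```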