Expressive Sign Equivariant Networks for Spectral Geometric Learning
Authors: Derek Lim, Joshua Robinson, Stefanie Jegelka, Haggai Maron
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate our theoretical results, we conduct various numerical experiments on synthetic datasets. Experiments in link prediction, n-body problems, and node clustering in graphs support our theory and demonstrate the utility of sign equivariant models. |
| Researcher Affiliation | Collaboration | Derek Lim (MIT CSAIL, dereklim@mit.edu); Joshua Robinson (Stanford University); Stefanie Jegelka (TU Munich, MIT CSAIL); Haggai Maron (Technion, NVIDIA) |
| Pseudocode | No | The paper describes methods in prose and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states only an intention to release code, with no repository link provided: "Our codes for our models and experiments will be open-sourced and permissively licensed." |
| Open Datasets | Yes | We test models on the CLUSTER dataset [Dwivedi et al., 2022a] for semi-supervised node clustering (viewed as node classification) in synthetic graphs. ... We follow the experimental setting and build on the code of Puny et al. [2022] (no license as far as we can tell) for the n-body learning task. The code for generating the data stems from Kipf et al. [2018] (MIT License) and Fuchs et al. [2020] (MIT License). |
| Dataset Splits | Yes | The train/validation/test split is 80%/10%/10%, and is chosen uniformly at random. |
| Hardware Specification | Yes | Each experiment was run on a single NVIDIA V100 GPU with 32GB memory. |
| Software Dependencies | No | The paper mentions software such as NetworkX, the Adam optimizer, and the GraphGPS framework, but does not provide version numbers for these dependencies, which are necessary for full reproducibility. |
| Experiment Setup | Yes | We train each method for 100 epochs with an Adam optimizer [Kingma and Ba, 2015] at a learning rate of 0.01. (A sketch of this setup appears below the table.) |
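
To make the reported configuration concrete, here is a minimal sketch of the setup the table describes: a uniformly random 80%/10%/10% train/validation/test split, an Adam optimizer at learning rate 0.01, and 100 training epochs. PyTorch is assumed, and the model class, dataset wrapper, and batch size are hypothetical placeholders, since the authors' code is not yet public; this illustrates the reported hyperparameters, not the authors' implementation.

```python
import torch
from torch.utils.data import DataLoader, random_split

# Hypothetical placeholders -- the authors' actual model and dataset
# classes are not public, so these names are assumptions for illustration.
from my_models import SignEquivariantNet   # hypothetical model class
from my_data import ClusterDataset         # hypothetical dataset wrapper

dataset = ClusterDataset()

# 80%/10%/10% split, chosen uniformly at random
# (as reported in the "Dataset Splits" row above).
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
n_test = n - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SignEquivariantNet().to(device)

# Adam at learning rate 0.01, trained for 100 epochs
# (as reported in the "Experiment Setup" row above).
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    model.train()
    # batch size is an assumption; it is not reported in the excerpt
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```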