Equivariant Polynomials for Graph Neural Networks

Authors: Omri Puny, Derek Lim, Bobak Kiani, Haggai Maron, Yaron Lipman

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we enhance the expressivity of common GNN architectures by adding polynomial features or additional operations/aggregations inspired by our theory. These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks."
Researcher Affiliation | Collaboration | (1) Weizmann Institute of Science, (2) MIT CSAIL, (3) NVIDIA Research, (4) Meta AI Research.
Pseudocode | Yes | Algorithm 1: Decide if P_H is computable by F_n or F_e.
Open Source Code | No | The paper contains no explicit statement about releasing its source code and no link to a code repository for the described methodology; it only mentions using the PyTorch framework.
Open Datasets | Yes | "Both data splits can be obtained from (Fey & Lenssen, 2019)" (see the data-loading sketch after the table).
Dataset Splits | Yes | "The protocol includes parameter budget (500K), 4 predefined random seeds and a learning rate decay scheme that reduces the rate based on the validation error (factor 0.5 and patience of 10 epochs)."
Hardware Specification | Yes | "models were trained using the LAMB optimizer (You et al., 2019) on a single Nvidia V100 GPU."
Software Dependencies | No | The paper states: "The models were trained using the PyTorch framework (Paszke et al., 2019)". PyTorch is cited, but no specific version number (e.g., PyTorch 1.9) is given, so the software environment is not fully reproducible under the criteria.
Experiment Setup | Yes | "The protocol includes parameter budget (500K), 4 predefined random seeds and a learning rate decay scheme that reduces the rate based on the validation error (factor 0.5 and patience of 10 epochs). Initial learning rate was set to 0.002 and training stopped when it reached 10⁻⁵. Batch size was set to 128." (see the training-loop sketch after the table).
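
The data splits referenced in the table come from PyTorch Geometric (Fey & Lenssen, 2019). Below is a minimal, hedged sketch of obtaining them; ZINC is an assumption on my part (the 500K parameter budget and 4-seed protocol above match the standard ZINC-12k benchmarking setup), and the root path is arbitrary.

```python
# Hedged sketch: fetching benchmark splits via PyTorch Geometric
# (Fey & Lenssen, 2019). ZINC is an assumed benchmark, not stated in the table.
from torch_geometric.datasets import ZINC
from torch_geometric.loader import DataLoader

# subset=True selects the 12K-molecule subset used by the standard protocol
train_set = ZINC(root="data/ZINC", subset=True, split="train")
val_set = ZINC(root="data/ZINC", subset=True, split="val")
test_set = ZINC(root="data/ZINC", subset=True, split="test")

# batch size taken from the experiment-setup row above
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
```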
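The training protocol described in the last two rows can be made concrete with a short sketch. This is not the authors' code (none is released): `build_model`, `train_epoch`, and `evaluate` are hypothetical placeholders, the seed values are assumed, and the LAMB implementation is taken from the third-party torch-optimizer package as an assumption, since the paper only cites You et al. (2019).

```python
# Minimal sketch of the reported protocol: LAMB optimizer, initial LR 0.002,
# ReduceLROnPlateau on validation error (factor 0.5, patience 10 epochs),
# training stopped once the LR decays to 1e-5, batch size 128, 4 fixed seeds.
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau
import torch_optimizer  # third-party LAMB implementation (assumption)

SEEDS = [0, 1, 2, 3]  # the paper predefines 4 seeds; exact values assumed here

for seed in SEEDS:
    torch.manual_seed(seed)
    model = build_model(param_budget=500_000)  # hypothetical constructor
    optimizer = torch_optimizer.Lamb(model.parameters(), lr=2e-3)
    scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=10)

    # Decay the LR on validation-error plateaus; stop once it falls to 1e-5.
    while optimizer.param_groups[0]["lr"] > 1e-5:
        train_epoch(model, optimizer, batch_size=128)  # hypothetical helper
        val_error = evaluate(model, split="val")       # hypothetical helper
        scheduler.step(val_error)
```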