Improving Graph Neural Networks with Learnable Propagation Operators

Authors: Moshe Eliasof, Lars Ruthotto, Eran Treister

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments confirm these findings, demonstrating and explaining how both variants do not over-smooth. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our ωGCN and ωGAT perform on par with state-of-the-art methods."
Researcher Affiliation | Academia | "(1) Department of Computer Science, Ben-Gurion University, Beer-Sheva, Israel. (2) Department of Mathematics, Emory University, Atlanta, Georgia, USA."
Pseudocode | No | The paper describes its architectures and equations but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that its code is implemented with PyTorch and PyTorch Geometric but does not provide a link to a public code repository or explicitly state that the code is open-source or available.
Open Datasets | Yes | "We employ the Cora, Citeseer, and Pubmed datasets using the standard training/validation/testing split by (Yang et al., 2016)... We employ the PPI dataset (Hamilton et al., 2017) for the inductive learning task... we experiment with graph classification on TUDatasets (Morris et al., 2020)."
Dataset Splits | Yes | "We employ the Cora, Citeseer, and Pubmed datasets using the standard training/validation/testing split by (Yang et al., 2016), with 20 nodes per class for training, 500 validation nodes, and 1,000 testing nodes." (A loading sketch for these datasets and this split follows the table.)
Hardware Specification | Yes | "Our code is implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019) and trained on an Nvidia Titan RTX GPU."
Software Dependencies | No | The paper states "Our code is implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)". It cites the library papers but gives no version numbers, so the exact software environment cannot be pinned down for reproduction. (A version-recording snippet appears below the table.)
Experiment Setup | Yes | "We use the Adam (Kingma & Ba, 2014) optimizer in all experiments, and perform grid search to determine the hyper-parameters reported in Appendix F. The objective function in all experiments is the cross-entropy loss, besides inductive learning on PPI (Hamilton et al., 2017) where we use the binary cross-entropy loss." (A training-loop sketch matching this setup appears after the table.)
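
All of the datasets quoted above are distributed with PyTorch Geometric, so the Open Datasets and Dataset Splits entries can be checked directly from the library. Below is a minimal loading sketch, assuming only the torch_geometric package; the paper releases no code, so none of this is the authors' own loading logic, and MUTAG stands in only as an example TUDataset.

```python
# Sketch: load the datasets named in the paper with PyTorch Geometric loaders.
from torch_geometric.datasets import Planetoid, PPI, TUDataset

# Cora / Citeseer / Pubmed with the standard "public" split of Yang et al. (2016):
# 20 training nodes per class, 500 validation nodes, 1,000 test nodes.
cora = Planetoid(root="data/Planetoid", name="Cora", split="public")
data = cora[0]
print(int(data.train_mask.sum()),  # 140 = 20 nodes per class x 7 classes
      int(data.val_mask.sum()),    # 500
      int(data.test_mask.sum()))   # 1000

# PPI (Hamilton et al., 2017) for the inductive node-classification task.
ppi_train, ppi_val, ppi_test = (PPI(root="data/PPI", split=s) for s in ("train", "val", "test"))

# A TUDataset (Morris et al., 2020) for graph classification; MUTAG is only an example,
# since the quoted text does not list which TUDatasets the paper uses.
mutag = TUDataset(root="data/TUDataset", name="MUTAG")
```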
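
Because no library versions are reported, a reproduction should record its own environment. One simple way to do so with the two libraries the paper names; the versions printed are whatever the reproducing machine has installed, not values from the paper.

```python
# Record the library versions and GPU actually used in a reproduction run,
# since the paper cites PyTorch and PyTorch Geometric without version numbers.
import torch
import torch_geometric

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # the paper reports an Nvidia Titan RTX
```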
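
The quoted experiment setup (Adam, grid-searched hyper-parameters, cross-entropy loss, binary cross-entropy for PPI) corresponds to a standard transductive training loop. The sketch below assumes a generic two-argument GNN model and the Cora data object from the loading example above; the learning rate, weight decay, and epoch count are illustrative grid-search candidates, not the values from the paper's Appendix F.

```python
import torch
import torch.nn.functional as F

def train_transductive(model, data, lr=0.01, weight_decay=5e-4, epochs=200):
    """Adam + cross-entropy node-classification loop matching the quoted setup.
    lr, weight_decay, and epochs are placeholders to be grid-searched, not the
    hyper-parameters reported in the paper's Appendix F."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        logits = model(data.x, data.edge_index)  # assumed model signature
        loss = F.cross_entropy(logits[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    return model

# For inductive learning on PPI the paper uses the binary cross-entropy loss instead,
# e.g. F.binary_cross_entropy_with_logits(model(batch.x, batch.edge_index), batch.y)
# over mini-batched PPI graphs.
```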