pathGCN: Learning General Graph Spatial Operators from Paths

Authors: Moshe Eliasof, Eldad Haber, Eran Treister

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our extensive experiments on numerous datasets suggest that by properly learning both the spatial and point-wise convolutions, phenomena like over-smoothing can be inherently avoided, and new state-of-the-art performance is achieved."
Researcher Affiliation | Academia | Department of Computer Science, Ben-Gurion University, Israel; Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Canada.
Pseudocode | No | No structured pseudocode or algorithm blocks were found. The network architecture is described using tables and text in Appendix A.
Open Source Code | No | The paper states "Our code is implemented using PyTorch (Paszke et al., 2019) and PyTorch-Geometric (Fey & Lenssen, 2019)" but does not provide a link or an explicit statement that the code for the proposed method is publicly available.
Open Datasets | Yes | "Here, we use three datasets: Cora, Citeseer, and Pubmed (Sen et al., 2008). For all datasets we use the standard training/validation/testing split as in (Yang et al., 2016)..."
Dataset Splits | Yes | "For all datasets we use the standard training/validation/testing split as in (Yang et al., 2016), with 20 nodes per class for training, 500 validation nodes and 1,000 testing nodes and follow the training scheme of (Chen et al., 2020)."
Hardware Specification | Yes | "Our code is implemented using PyTorch (Paszke et al., 2019) and PyTorch-Geometric (Fey & Lenssen, 2019) and trained on an Nvidia Titan RTX GPU."
Software Dependencies | No | The paper mentions PyTorch (Paszke et al., 2019) and PyTorch-Geometric (Fey & Lenssen, 2019) as implementation tools, but does not provide specific version numbers for these or other key software components used in the experiments.
Experiment Setup | Yes | "A detailed description of the network architecture is given in Appendix A. We use the Adam (Kingma & Ba, 2014) optimizer in all experiments, and perform a grid search over the hyper-parameters of our network. The selected hyper-parameters are reported in Appendix B."
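The split sizes quoted above (20 training nodes per class, 500 validation nodes, 1,000 test nodes) can be checked with simple arithmetic. A minimal sketch, assuming the standard class counts of the three citation datasets (Cora has 7 classes, Citeseer 6, Pubmed 3):

```python
# Standard Planetoid split sizes (Yang et al., 2016):
# 20 training nodes per class, 500 validation nodes, 1,000 test nodes.
# Class counts per dataset are assumed from the standard benchmarks.
NUM_CLASSES = {"Cora": 7, "Citeseer": 6, "Pubmed": 3}

def split_sizes(dataset, train_per_class=20, num_val=500, num_test=1000):
    """Return (train, val, test) node counts for the standard split."""
    return (train_per_class * NUM_CLASSES[dataset], num_val, num_test)

for name in NUM_CLASSES:
    print(name, split_sizes(name))
# Cora trains on 7 * 20 = 140 labeled nodes, Citeseer on 120, Pubmed on 60.
```

This makes explicit how small the labeled training sets are relative to the fixed 500/1,000 validation/test pools.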
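The experiment-setup row notes that the authors tune hyper-parameters by grid search; the paper itself reports only the selected values (Appendix B), not the search code. A minimal sketch of such a search, where the grid values and the `train_and_evaluate` stub are illustrative assumptions rather than the authors' actual search space:

```python
import itertools

# Hypothetical search space; the paper's actual grid is not published.
GRID = {
    "lr": [1e-2, 1e-3],
    "weight_decay": [5e-4, 1e-4],
    "dropout": [0.5, 0.6],
}

def train_and_evaluate(config):
    """Placeholder: train with Adam under `config` and return validation
    accuracy. A dummy score stands in so the sketch is runnable."""
    return sum(config.values())

def grid_search(grid):
    """Exhaustively try every combination; keep the best validation score."""
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        score = train_and_evaluate(config)
        if score > best_score:
            best_cfg, best_score = config, score
    return best_cfg
```

In practice `train_and_evaluate` would run a full training loop per configuration and the winning configuration would be re-evaluated on the test split once.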