Graph Random Neural Networks for Semi-Supervised Learning on Graphs

Authors: Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, Jie Tang

NeurIPS 2020

Reproducibility variables, results, and LLM responses:
Research Type: Experimental — Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.
Researcher Affiliation: Collaboration — Wenzheng Feng (1), Jie Zhang (2), Yuxiao Dong (3), Yu Han (1), Huanbo Luan (1), Qian Xu (2), Qiang Yang (2), Evgeny Kharlamov (4), Jie Tang (1); (1) Department of Computer Science and Technology, Tsinghua University, (2) WeBank Co., Ltd, (3) Microsoft Research, (4) Bosch Center for Artificial Intelligence.
Pseudocode: Yes — Algorithm 1 (GRAND) is provided in the paper.
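As a concrete illustration of that pseudocode, below is a minimal PyTorch sketch of Algorithm 1's main steps: DropNode random propagation, prediction with an MLP classifier, and consistency regularization across S augmentations. The function names, the dense adjacency handling, and the training-step structure are assumptions for illustration, not the authors' released implementation.

    import torch
    import torch.nn.functional as F

    def drop_node(x, drop_rate=0.5, training=True):
        # DropNode: zero out whole node feature rows with probability
        # drop_rate, rescaling survivors so the expectation is unchanged.
        if not training:
            return x
        keep = torch.bernoulli(
            torch.full((x.size(0), 1), 1.0 - drop_rate, device=x.device))
        return x * keep / (1.0 - drop_rate)

    def propagate(x, adj_norm, k=2):
        # Mixed-order propagation: average A^0 X, A^1 X, ..., A^K X,
        # where adj_norm is the symmetrically normalized adjacency
        # (a dense tensor here, for simplicity).
        h, acc = x, x
        for _ in range(k):
            h = adj_norm @ h
            acc = acc + h
        return acc / (k + 1)

    def consistency_loss(log_probs, temperature=0.5):
        # Sharpen the average of the S predictions, then penalize each
        # prediction's squared L2 distance from the sharpened target.
        probs = [lp.exp() for lp in log_probs]
        avg = torch.stack(probs, dim=0).mean(dim=0)
        sharp = avg.pow(1.0 / temperature)
        sharp = (sharp / sharp.sum(dim=1, keepdim=True)).detach()
        return sum((p - sharp).pow(2).sum(dim=1).mean()
                   for p in probs) / len(probs)

    def grand_step(mlp, x, adj_norm, y, train_mask, s=10, drop_rate=0.5,
                   k=2, lam=1.0, temperature=0.5):
        # One training step: S random-propagation augmentations, the
        # average supervised loss on labeled nodes, plus the consistency
        # loss over all nodes.
        log_probs = []
        for _ in range(s):
            x_aug = propagate(drop_node(x, drop_rate), adj_norm, k=k)
            log_probs.append(F.log_softmax(mlp(x_aug), dim=1))
        sup = sum(F.nll_loss(lp[train_mask], y[train_mask])
                  for lp in log_probs) / s
        return sup + lam * consistency_loss(log_probs, temperature)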
Open Source Code: Yes — The source code of GRAND is publicly available at https://github.com/Grand20/grand.
Open Datasets: Yes — The paper conducts experiments on three benchmark graphs [42, 20, 35] (Cora, Citeseer, and Pubmed) and also reports results on six publicly available large datasets in Appendix C.1.
Dataset Splits: Yes — The paper follows exactly the same experimental procedure (such as features and data splits) as the standard GNN settings on semi-supervised graph learning [42, 20, 35]; setup and reproducibility details are covered in Appendix A.
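The standard splits referenced by [42, 20, 35] are the public Planetoid splits (20 labeled nodes per class, 500 validation nodes, 1,000 test nodes). They can be loaded, for example, with PyTorch Geometric; this loader is a convenience for illustration and is not necessarily the pipeline the authors used.

    from torch_geometric.datasets import Planetoid

    # Load Cora with its standard public split.
    dataset = Planetoid(root='data/Cora', name='Cora')
    data = dataset[0]
    print(data.train_mask.sum().item(),  # 140 training nodes (20 per class)
          data.val_mask.sum().item(),    # 500 validation nodes
          data.test_mask.sum().item())   # 1000 test nodes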
Hardware Specification: No — The paper does not describe the specific hardware (e.g., CPU or GPU models) used to run its experiments.
Software Dependencies: Yes — All experiments were implemented with Python 3.6 and PyTorch 1.4.0.
Experiment Setup: Yes — For all datasets, the propagation step is set to K = 2. The model is trained using the Adam optimizer with initial learning rate 0.001 and weight decay 0.0005. The batch size is 128. For DropNode and dropout, the drop rate δ is set to 0.5. For consistency regularization, λ is set to 1 and the temperature T to 0.5. The number of augmentations S is 10.
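For reference, the reported hyperparameters are collected below in a short PyTorch snippet. The optimizer settings are the values quoted above; the two-layer MLP and its hidden size of 32 are illustrative assumptions (the paper's per-dataset architectures are given in its appendix).

    import torch
    import torch.nn as nn

    # Hyperparameters as reported in the paper.
    config = dict(
        propagation_steps=2,    # K
        drop_rate=0.5,          # DropNode / dropout rate (delta)
        num_augmentations=10,   # S
        lam=1.0,                # consistency regularization weight (lambda)
        temperature=0.5,        # sharpening temperature (T)
        lr=0.001,               # Adam initial learning rate
        weight_decay=0.0005,
        batch_size=128,
    )

    # Illustrative classifier for Cora (1433 input features, 7 classes);
    # the hidden size of 32 is an assumption, not taken from the paper.
    mlp = nn.Sequential(
        nn.Linear(1433, 32),
        nn.ReLU(),
        nn.Dropout(config['drop_rate']),
        nn.Linear(32, 7),
    )

    optimizer = torch.optim.Adam(mlp.parameters(), lr=config['lr'],
                                 weight_decay=config['weight_decay'])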