NodeMixup: Tackling Under-Reaching for Graph Neural Networks

Authors: Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang, Long Jin

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the efficacy of NodeMixup in assisting GNNs in handling under-reaching. We evaluate NodeMixup on the semi-supervised node classification task. We use five medium-scale datasets, i.e., CORA, CITESEER, and PUBMED (Yang, Cohen, and Salakhudinov 2016), COAUTHOR CS and COAUTHOR PHYSICS (Shchur et al. 2018), and a large-scale graph, i.e., OGBN-ARXIV (Hu et al. 2020).
Researcher Affiliation | Academia | School of Computer Science and Technology, Xidian University, China; {wglu@stu., zyguan@, ywzhao@mail., yym@, jin@stu.}xidian.edu.cn
Pseudocode | No | The paper describes the NodeMixup method with text and figures, but does not include a structured pseudocode or algorithm block.
Open Source Code | Yes | The source code is available at https://github.com/WeigangLu/NodeMixup.
Open Datasets | Yes | We use five medium-scale datasets, i.e., CORA, CITESEER, and PUBMED (Yang, Cohen, and Salakhudinov 2016), COAUTHOR CS and COAUTHOR PHYSICS (Shchur et al. 2018), and a large-scale graph, i.e., OGBN-ARXIV (Hu et al. 2020). (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | For CORA, CITESEER, and PUBMED datasets, we stick to the public splits (20 nodes per class for training, 500 nodes for validation, and 1000 nodes for testing) used in (Yang, Cohen, and Salakhudinov 2016). For COAUTHOR CS and COAUTHOR PHYSICS, we follow the splits in (Shchur et al. 2018), i.e., 20 labeled nodes per class as the training set, 30 nodes per class as the validation set, and the rest as the test set. (A split-construction sketch follows the table.)
Hardware Specification | Yes | All the experiments are conducted on an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions the PyTorch Geometric library and the Adam optimizer but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | For our proposed NodeMixup, which is implemented with the PyTorch Geometric library (Fey and Lenssen 2019) and the Adam optimizer (Kingma and Ba 2015), we search both λ_inter and λ_intra in {1, 1.1, ..., 1.5}, β_d and β_s in {0.5, 1, 1.5, 2}, and γ in {0.5, 0.7, 0.9}. (A grid-search sketch follows the table.)
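
The datasets listed in the Open Datasets row are all available through standard libraries. Below is a minimal loading sketch, assuming PyTorch Geometric and the ogb package are installed; the root directories are illustrative, not from the paper.

```python
from torch_geometric.datasets import Planetoid, Coauthor
from ogb.nodeproppred import PygNodePropPredDataset

# CORA, CITESEER, PUBMED with the public split of Yang, Cohen, and Salakhudinov (2016)
cora = Planetoid(root="data/Planetoid", name="Cora", split="public")
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer", split="public")
pubmed = Planetoid(root="data/Planetoid", name="PubMed", split="public")

# COAUTHOR CS and COAUTHOR PHYSICS ship without a predefined split (see the next sketch)
coauthor_cs = Coauthor(root="data/Coauthor", name="CS")
coauthor_physics = Coauthor(root="data/Coauthor", name="Physics")

# OGBN-ARXIV comes with its official train/valid/test split
arxiv = PygNodePropPredDataset(root="data/OGB", name="ogbn-arxiv")
split_idx = arxiv.get_idx_split()  # {'train': ..., 'valid': ..., 'test': ...}
```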
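For the Coauthor datasets, the Dataset Splits row describes 20 labeled nodes per class for training, 30 per class for validation, and the rest for testing. The following is a minimal sketch of how such a per-class split can be built; the function name and the random seed are illustrative, not taken from the paper.

```python
import torch

def per_class_split(y: torch.Tensor, num_train: int = 20, num_val: int = 30, seed: int = 0):
    """Sample `num_train` training and `num_val` validation nodes per class; the rest are test nodes."""
    gen = torch.Generator().manual_seed(seed)
    train_mask = torch.zeros(y.size(0), dtype=torch.bool)
    val_mask = torch.zeros(y.size(0), dtype=torch.bool)
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=gen)]
        train_mask[idx[:num_train]] = True
        val_mask[idx[num_train:num_train + num_val]] = True
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask

# Usage, e.g. on COAUTHOR CS loaded above:
# data = coauthor_cs[0]
# data.train_mask, data.val_mask, data.test_mask = per_class_split(data.y)
```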
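The Experiment Setup row reports a grid search over λ_inter, λ_intra, β_d, β_s, and γ. Below is a minimal sketch of iterating that grid; `train_and_eval` is a hypothetical stand-in for training a GNN with NodeMixup under one configuration and returning validation accuracy, not the authors' code.

```python
from itertools import product
import random

def train_and_eval(lam_inter, lam_intra, beta_d, beta_s, gamma):
    # Hypothetical placeholder: train the model under this configuration and
    # return validation accuracy. Replace with the real training loop.
    return random.random()

lambda_grid = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]  # searched for both lambda_inter and lambda_intra
beta_grid = [0.5, 1.0, 1.5, 2.0]              # searched for both beta_d and beta_s
gamma_grid = [0.5, 0.7, 0.9]

best_acc, best_cfg = 0.0, None
for cfg in product(lambda_grid, lambda_grid, beta_grid, beta_grid, gamma_grid):
    acc = train_and_eval(*cfg)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print(f"best validation accuracy {best_acc:.4f} with (λ_inter, λ_intra, β_d, β_s, γ) = {best_cfg}")
```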