Regularizing Graph Neural Networks via Consistency-Diversity Graph Augmentations

Authors: Deyu Bo, Binbin Hu, Xiao Wang, Zhiqiang Zhang, Chuan Shi, Jun Zhou

AAAI 2022, pp. 3913-3921

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on five real-world graphs validate the superiority of our method in improving the accuracy and generalization of GNNs.
Researcher Affiliation | Collaboration | 1 Beijing University of Posts and Telecommunications; 2 Ant Group, Hangzhou, China. {bodeyu, xiaowang, shichuan}@bupt.edu.cn, {bin.hbb, lingyao.zzq, jun.zhoujun}@antfin.com
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide an explicit statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | Cora, Citeseer and Pubmed from (Kipf and Welling 2017), and two co-purchase datasets, namely Amazon Computers and Amazon Photo, from (Shchur et al. 2018).
Dataset Splits | Yes | In the standard split, each class has 20 labeled nodes, with 500 nodes for validation and 1000 nodes for testing. The second is a low-label split of the citation networks, where each class has 5 labeled nodes and the validation and testing sets are the same as in the standard split. The third is the random split of the co-purchase datasets, where 20 nodes per class are randomly sampled for training, 30 nodes for validation and the rest for testing, as suggested by (Shchur et al. 2018). A loading and split sketch in code follows this table.
Hardware Specification | No | The paper does not provide specific details on the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'GCNs' but does not provide specific version numbers for any libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used, which are essential for reproducibility.
Experiment Setup | Yes | The hyper-parameters are set as follows: learning rate = 0.01, weight decay = 1e-3, hidden units = 32, and the Adam optimizer (Kingma and Ba 2015) for all methods. For the benchmarks, if the original papers provide the hyper-parameters, they are set as the authors suggested. For NASA, the dropout rate is searched in {0.1, ..., 0.9}, the sharpening temperature in {0.1, ..., 1.0} and α in {0.1, ..., 1.0} for all datasets. NASA is run for 1000 epochs and the model with the lowest validation loss is selected for testing. A configuration and training sketch follows this table.
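
The dataset and split descriptions above are concrete enough to sketch in code. The following is a minimal sketch, assuming PyTorch Geometric as the data-loading library (the paper does not state its software stack), and the `random_split` helper is an illustrative reconstruction of the co-purchase split, not code from NASA.

```python
# Hypothetical sketch of the dataset loading and splits described above.
# PyTorch Geometric is an assumption; random_split is an illustrative helper.
import torch
from torch_geometric.datasets import Planetoid, Amazon


def random_split(data, num_classes, train_per_class=20, val_per_class=30):
    """Random split for the co-purchase datasets: 20 training and 30 validation
    nodes per class, remaining nodes for testing (Shchur et al. 2018)."""
    train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    for c in range(num_classes):
        idx = (data.y == c).nonzero(as_tuple=False).view(-1)
        idx = idx[torch.randperm(idx.size(0))]
        train_mask[idx[:train_per_class]] = True
        val_mask[idx[train_per_class:train_per_class + val_per_class]] = True
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask


# Citation networks ship with the standard split (20 labels per class,
# 500 validation nodes, 1000 test nodes) from Kipf and Welling (2017);
# the low-label variant keeps 5 labels per class with the same val/test sets.
cora = Planetoid(root='data/Planetoid', name='Cora')[0]

# Co-purchase datasets have no canonical split, so one is sampled at random.
computers = Amazon(root='data/Amazon', name='Computers')
train_mask, val_mask, test_mask = random_split(computers[0], computers.num_classes)
```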
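The reported experiment setup can likewise be summarized as a small configuration and training sketch. All model-side names below (`build_model`, `model.loss`, `build_gnn`) are placeholders, since NASA's code is not released; only the hyper-parameter values, search grids, epoch budget, and validation-based model selection come from the paper.

```python
# Minimal sketch of the reported training protocol under stated assumptions:
# build_model and model.loss are placeholders, not the authors' API.
import torch

LEARNING_RATE = 0.01        # for all methods
WEIGHT_DECAY = 1e-3
HIDDEN_UNITS = 32
EPOCHS = 1000

# Search grids reported for NASA.
DROPOUT_GRID = [round(0.1 * i, 1) for i in range(1, 10)]       # 0.1 ... 0.9
TEMPERATURE_GRID = [round(0.1 * i, 1) for i in range(1, 11)]   # 0.1 ... 1.0
ALPHA_GRID = [round(0.1 * i, 1) for i in range(1, 11)]         # 0.1 ... 1.0


def train_one_config(build_model, data, dropout, temperature, alpha):
    """Train for 1000 epochs and keep the weights with the lowest validation loss."""
    model = build_model(hidden=HIDDEN_UNITS, dropout=dropout)   # placeholder constructor
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
    best_val, best_state = float('inf'), None
    for _ in range(EPOCHS):
        model.train()
        optimizer.zero_grad()
        loss = model.loss(data, temperature=temperature, alpha=alpha)          # placeholder API
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = model.loss(data, temperature=temperature, alpha=alpha,
                                  mask=data.val_mask)                          # placeholder API
        if val_loss.item() < best_val:
            best_val = val_loss.item()
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model, best_val

# Grid search over the reported ranges, e.g.:
# for d, t, a in itertools.product(DROPOUT_GRID, TEMPERATURE_GRID, ALPHA_GRID):
#     model, val_loss = train_one_config(build_gnn, data, d, t, a)
```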