MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning

Authors: Xumeng Gong, Cheng Yang, Chuan Shi

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results show that MA-GCL can achieve state-of-the-art performance on node classification benchmarks by applying the three tricks on a simple base model. Extensive studies also validate our motivation and the effectiveness of each trick." |
| Researcher Affiliation | Academia | Xumeng Gong, Cheng Yang*, Chuan Shi. Beijing University of Posts and Telecommunications. Xumeng1141@bupt.edu.cn, yangcheng@bupt.edu.cn, shichuan@bupt.edu.cn |
| Pseudocode | Yes | Algorithm 1: Implementation of MA-GCL |
| Open Source Code | Yes | "Code, data and appendix are available at https://github.com/GXM1141/MA-GCL." |
| Open Datasets | Yes | "We evaluate our approach on six benchmark datasets of node classification, which have been widely used in previous GCL methods. Specifically, citation datasets include Cora, CiteSeer and PubMed (Kipf and Welling 2016); co-purchase and co-author datasets include Amazon-Photo, Amazon-Computers and Coauthor-CS (Shchur et al. 2018)." |
| Dataset Splits | Yes | "For co-purchase and co-author datasets, we randomly split the datasets, where 10%, 10%, and the rest 80% of nodes are selected for the training, validation and test set, respectively (Zhu et al. 2021; Zhang et al. 2021)." |
| Hardware Specification | No | The paper states "The appendix also includes details of datasets, environment and hyper-parameter settings," but "environment" is too vague to satisfy the requirement for specific hardware details such as GPU/CPU models or memory. |
| Software Dependencies | No | The same "environment" statement is too vague and does not name specific software with version numbers, which reproducibility would require. |
| Experiment Setup | Yes | "More details about hyper-parameter settings are provided in Appendix C." |
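The 10%/10%/80% random node split reported for the co-purchase and co-author datasets can be sketched as follows. This is an illustrative reconstruction, not code from the MA-GCL repository; the function name, seed handling, and rounding behavior are assumptions.

```python
import random

def random_node_split(num_nodes, train_frac=0.1, val_frac=0.1, seed=0):
    """Randomly partition node indices into train/val/test sets.

    Mirrors the split described in the paper: 10% train, 10% val,
    and the remaining 80% test, drawn uniformly at random.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]  # the rest (~80%) goes to test
    return train, val, test
```

For example, on a graph with 1,000 nodes this yields disjoint sets of 100 training, 100 validation, and 800 test nodes.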