Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering

Authors: Dongxiao He, Lianze Shan, Jitao Zhao, Hengrui Zhang, Zhen Wang, Weixiong Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate SGRL across various downstream tasks on benchmark datasets, demonstrating its efficacy and superiority over existing GCL methods. Our findings underscore the significance of representation scattering in GCL and provide a structured framework for harnessing this mechanism to advance graph representation learning. The code of SGRL is at https://github.com/hedongxiao-tju/SGRL.
Researcher Affiliation | Academia | Dongxiao He (1), Lianze Shan (1), Jitao Zhao (1), Hengrui Zhang (2), Zhen Wang (3), Weixiong Zhang (4). (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States; (3) School of Cybersecurity, Northwestern Polytechnical University, Xi'an, China; (4) Department of Computing and Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong. Emails: {hedongxiao, shanlz2119, zjtao}@tju.edu.cn; hzhan55@uic.edu; w-zhen@nwpu.edu.cn; weixiong.zhang@polyu.edu.hk
Pseudocode | No | No explicit pseudocode or algorithm blocks were found.
Open Source Code | Yes | The code of SGRL is at https://github.com/hedongxiao-tju/SGRL.
Open Datasets | Yes | We evaluated SGRL on five of the most widely used benchmark datasets: Amazon-Photo (Photo) and Amazon-Computers (Computers) [25], Wiki-CS [26], and Coauthor-CS (Co.CS) and Coauthor-Physics (Co.Physics) [27]. Detailed information on these datasets is in Appendix B.2. (A dataset-loading sketch appears below the table.)
Dataset Splits | Yes | Specifically, we trained the downstream classifier using 10% of the data and tested the classifier on the remaining 90%. (A sketch of this protocol appears below the table.)
Hardware Specification | Yes | All experiments were carried out on an NVIDIA GeForce RTX 3090 GPU, which comes equipped with 24GB of memory.
Software Dependencies | Yes | For model development, we utilized PyTorch version 1.13.1 [33], along with PyTorch Geometric version 2.3.0 [34], which also served as the source for all the datasets used in our study. (An environment-check sketch appears below the table.)
Experiment Setup | Yes | Table 4: Detailed hyperparameters of SGRL. (A sketch of the momentum update these settings suggest appears below the table.)

Dataset | Hidden dim | Online learning rate | Target learning rate | Training epochs | Activation | Momentum
Wiki-CS | 1024 | 0.00001 | 0.00001 | 500 | PReLU | 0.99
Amazon-Computers | 1024 | 0.001 | 0.001 | 700 | PReLU | 0.99
Amazon-Photo | 1024 | 0.001 | 0.001 | 700 | PReLU | 0.99
Coauthor-CS | 1024 | 0.001 | 0.001 | 700 | PReLU | 0.99
Coauthor-Physics | 1024 | 0.0001 | 0.00001 | 1000 | PReLU | 0.99
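
Dataset loading. The Open Datasets and Software Dependencies rows indicate that all five benchmarks come from PyTorch Geometric. Below is a minimal loading sketch using PyG's built-in Amazon, Coauthor, and WikiCS dataset classes; the ./data root path is a hypothetical placeholder, not taken from the paper or its repository.

```python
# Minimal sketch: load the five quoted benchmarks from PyTorch Geometric.
# The root path "./data" is a hypothetical placeholder.
from torch_geometric.datasets import Amazon, Coauthor, WikiCS

root = "./data"
datasets = {
    "Photo": Amazon(root, name="Photo"),
    "Computers": Amazon(root, name="Computers"),
    "Wiki-CS": WikiCS(f"{root}/WikiCS"),
    "Co.CS": Coauthor(root, name="CS"),
    "Co.Physics": Coauthor(root, name="Physics"),
}
for key, ds in datasets.items():
    data = ds[0]  # each benchmark is a single graph
    print(f"{key}: {data.num_nodes} nodes, {data.num_edges} edges, {ds.num_classes} classes")
```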
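
Evaluation split. The Dataset Splits row quotes a 10%/90% protocol for the downstream classifier. Below is a minimal sketch of that protocol using a scikit-learn logistic-regression probe on frozen node embeddings; the choice of probe and the stratified split are assumptions, since the paper's exact classifier is not quoted here.

```python
# Minimal sketch of the quoted 10%/90% protocol: train a classifier on 10% of
# the (frozen) node embeddings and test on the remaining 90%. The logistic-
# regression probe and stratification are assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe(embeddings: np.ndarray, labels: np.ndarray, seed: int = 0) -> float:
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, train_size=0.1, random_state=seed, stratify=labels
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)  # accuracy on the 90% held-out nodes
```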
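
Environment check. The Software Dependencies row pins PyTorch 1.13.1 and PyTorch Geometric 2.3.0. A minimal sketch that verifies a reproduction environment matches those quoted versions:

```python
# Minimal sketch: assert the environment matches the quoted versions
# (PyTorch 1.13.1, PyTorch Geometric 2.3.0).
import torch
import torch_geometric

print(torch.__version__, torch_geometric.__version__)
assert torch.__version__.startswith("1.13.1"), torch.__version__
assert torch_geometric.__version__ == "2.3.0", torch_geometric.__version__
```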
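
Momentum update. Table 4's separate online/target learning rates and momentum of 0.99 suggest a BYOL-style online/target encoder pair updated by exponential moving average. The sketch below shows only that EMA step; the encoder is a hypothetical stand-in (input dim 745 as in Amazon-Photo, hidden dim 1024 from Table 4), not SGRL's actual architecture.

```python
# Minimal sketch of a BYOL-style EMA target update, assuming the online/target
# pair implied by Table 4; the encoder is a hypothetical stand-in.
import copy
import torch

online_encoder = torch.nn.Linear(745, 1024)  # e.g., Amazon-Photo features -> hidden dim 1024
target_encoder = copy.deepcopy(online_encoder)
for p in target_encoder.parameters():
    p.requires_grad_(False)  # target is updated by EMA, not by gradients

MOMENTUM = 0.99  # from Table 4

@torch.no_grad()
def ema_update(online: torch.nn.Module, target: torch.nn.Module, m: float = MOMENTUM) -> None:
    """target <- m * target + (1 - m) * online, parameter-wise."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1.0 - m)

# Called once per training step, after the optimizer update:
# ema_update(online_encoder, target_encoder)
```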