Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning

Authors: Xiaojun Guo, Yifei Wang, Zeming Wei, Yisen Wang

NeurIPS 2023

Reproducibility assessment: for each variable, the extracted result and the supporting LLM response.
Research Type: Experimental
LLM Response: In this paper, we challenge the commonly held beliefs regarding GCL by revealing its distinct characteristics in comparison to VCL. Specifically, we perform a systematic study with a wide range of representative GCL methods on well-known benchmarks and find three intriguing properties: ...

Researcher Affiliation: Academia
LLM Response: Xiaojun Guo (1), Yifei Wang (2), Zeming Wei (2), Yisen Wang (1, 3); (1) National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; (2) School of Mathematical Sciences, Peking University; (3) Institute for Artificial Intelligence, Peking University

Pseudocode: No
LLM Response: The paper describes methods and equations but does not include any blocks or figures explicitly labeled as 'Pseudocode' or 'Algorithm'.

Open Source Code: Yes
LLM Response: Code is available at https://github.com/PKU-ML/ArchitectureMattersGCL.

Open Datasets: Yes
LLM Response: To illustrate this, we conduct experiments on the CIFAR-10 dataset [25], comparing the InfoNCE loss (including positive samples) and the uniformity loss (excluding positive samples). ... we conduct comprehensive experiments on both node classification and graph classification tasks. As shown in Table 2, the accuracy gap between the contrastive loss (Contrast) and the loss without positives (NoPos) is relatively narrow across most node classification datasets. Similarly, in Table 3, we observe similar phenomena in graph classification, where using loss without positive samples sometimes even outperforms the contrastive loss.
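
For context, the comparison quoted above contrasts the standard InfoNCE objective (which includes positive pairs) with a uniformity-only objective (which excludes them). Below is a minimal sketch of the two losses, assuming L2-normalized embeddings and a Wang-and-Isola-style uniformity term; the function names and temperature values are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE: pulls positive pairs (z1[i], z2[i]) together and
    pushes all other (negative) pairs apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)   # diagonal entries are positives

def uniformity_loss(z, t=2.0):
    """Uniformity term only, no positive pairs: log of the mean
    pairwise Gaussian potential over the batch."""
    z = F.normalize(z, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)    # all pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()
```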

Dataset Splits: Yes
LLM Response: In the training procedure, a 2-layer Graph Convolutional Network (GCN) [24] is adopted as the encoder. ... In the evaluation procedure, we randomly split each dataset with a training ratio of 0.8 and a test ratio of 0.1, and hyperparameters are fixed as the same for all the experiments. ... We report the mean 10-fold cross-validation accuracy with standard deviation.
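
As a rough illustration of the quoted evaluation protocol, the sketch below draws a random 0.8/0.1 split and computes mean 10-fold cross-validation accuracy with a linear classifier. The embeddings, labels, and classifier choice are placeholders for illustration; the paper does not confirm these details.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Dummy stand-ins for learned embeddings and their labels.
embeddings = np.random.randn(600, 64)
labels = np.random.randint(0, 2, size=600)

# Random 0.8 train / 0.1 test split, as quoted; the remaining 0.1
# is left out (presumably for validation).
rng = np.random.default_rng(0)
n = len(labels)
idx = rng.permutation(n)
train_idx, test_idx = idx[: int(0.8 * n)], idx[int(0.8 * n): int(0.9 * n)]
clf = LogisticRegression(max_iter=1000).fit(embeddings[train_idx], labels[train_idx])
print("linear-eval accuracy:", clf.score(embeddings[test_idx], labels[test_idx]))

# Mean 10-fold cross-validation accuracy with standard deviation,
# as reported for graph classification.
accs = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(embeddings, labels):
    fold_clf = LogisticRegression(max_iter=1000).fit(embeddings[tr], labels[tr])
    accs.append(fold_clf.score(embeddings[te], labels[te]))
print(f"{np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```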

Hardware Specification: Yes
LLM Response: All experiments are conducted on a single 24GB NVIDIA GeForce RTX 3090.

Software Dependencies: No
LLM Response: The paper mentions using the 'Adam SGD optimizer [23]' and 'PyTorch' but does not specify version numbers for these or other software components, which are necessary for reproducibility.

Experiment Setup: Yes
LLM Response: We grid search augmentation ratios in {0.0, 0.1, 0.2, 0.3, 0.4}. All experiments are trained with Adam SGD optimizer [23] with the learning rate selected from {0.01, 0.001, 0.0005}. The epoch number is selected from {200, 1000, 2000}. The other parameters are fixed for all datasets.
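
The quoted search space is small enough for exhaustive grid search. Below is a minimal sketch over exactly those grids; train_and_evaluate is a hypothetical stand-in for one full training run with the quoted Adam settings, not a function from the released code.

```python
import itertools
import random

def train_and_evaluate(aug_ratio, lr, epochs):
    """Hypothetical placeholder for one full training run with the
    Adam optimizer at these settings; returns test accuracy."""
    return random.random()  # replace with an actual training loop

aug_ratios = [0.0, 0.1, 0.2, 0.3, 0.4]   # augmentation ratios, as quoted
learning_rates = [0.01, 0.001, 0.0005]   # learning-rate grid, as quoted
epoch_counts = [200, 1000, 2000]         # epoch grid, as quoted

best_cfg, best_acc = None, -1.0
for ratio, lr, epochs in itertools.product(aug_ratios, learning_rates, epoch_counts):
    acc = train_and_evaluate(ratio, lr, epochs)
    if acc > best_acc:
        best_cfg, best_acc = (ratio, lr, epochs), acc
print("best config:", best_cfg, "accuracy:", best_acc)
```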