Boosting Graph Contrastive Learning via Graph Contrastive Saliency
Authors: Chunyu Wei, Yu Wang, Bing Bai, Kai Ni, David Brady, Lu Fang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate the GCS-based framework on 16 benchmarks for molecule-property and social-network tasks. Empirical results demonstrate the superior performance of GCS compared to state-of-the-art graph contrastive learning methods, including AD-GCL (Suresh et al., 2021) and RGCL (Li et al., 2022b). |
| Researcher Affiliation | Collaboration | 1Department of Electronic Engineering, Tsinghua University, Beijing, China 2Holo Technology (Beijing) Co., Ltd., Beijing, China 3University of Arizona, Arizona, USA. |
| Pseudocode | Yes | A. Algorithm / Algorithm 1 Graph Contrastive Saliency for Graph Contrastive Learning / Algorithm 2 Graph Contrastive Saliency (GCS) |
| Open Source Code | Yes | Code is available at https://github.com/weicy15/GCS. |
| Open Datasets | Yes | We used datasets from TUDataset (Morris et al., 2020) for evaluations. Following GraphCL (You et al., 2020), we used a 5-layer GIN (Xu et al., 2019) with a hidden size of 128 as the graph encoder and utilized an SVM as the classifier. The GIN was trained with a batch size of 128 and a learning rate of 0.001. We conducted a 10-fold cross-validation on each dataset, and each experiment was repeated 5 times. ... We first conducted self-supervised pre-training on the pre-processed ChEMBL dataset (Mayr et al., 2018) for 100 epochs. Subsequently, we fine-tuned the backbone model on 8 benchmark multi-task binary classification datasets in the biochemistry domain, which are included in MoleculeNet (Wu et al., 2018). |
| Dataset Splits | Yes | We conducted a 10-fold cross-validation on each dataset, and each experiment was repeated 5 times. ... We evaluated the mean and standard deviation of ROC-AUC scores of 10 runs with different random seeds on each downstream dataset, which is consistent with the baselines. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using the PyTorch Geometric library and GIN (Graph Isomorphism Network) as the backbone, but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For fair comparison, we follow the backbone setting in You et al. (2020), which adopts the GIN as the graph encoder (Xu et al., 2019). We summarize the corresponding hyperparameters in Table 10. ... The GIN was trained with a batch size of 128 and a learning rate of 0.001. ... We first conducted self-supervised pre-training on the pre-processed ChEMBL dataset (Mayr et al., 2018) for 100 epochs. |
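The encoder quoted above is a 5-layer GIN with hidden size 128. As a rough illustration of what that backbone computes (not the authors' code, which uses PyTorch Geometric), the GIN update rule from Xu et al. (2019) can be sketched in plain numpy; the graph, weights, and dimensions below are made-up stand-ins.

```python
import numpy as np

def gin_layer(h, adj, W1, b1, W2, b2, eps=0.0):
    # GIN update: h_v <- MLP((1 + eps) * h_v + sum of neighbor features)
    agg = (1.0 + eps) * h + adj @ h
    hidden = np.maximum(agg @ W1 + b1, 0.0)  # one-hidden-layer ReLU MLP
    return np.maximum(hidden @ W2 + b2, 0.0)

def gin_encoder(x, adj, params):
    # Stack layers, then sum-pool node embeddings into a graph embedding
    h = x
    for (W1, b1, W2, b2) in params:
        h = gin_layer(h, adj, W1, b1, W2, b2)
    return h.sum(axis=0)

rng = np.random.default_rng(0)
n_nodes, in_dim, hidden = 6, 8, 128      # hidden=128 as in the paper
adj = np.zeros((n_nodes, n_nodes))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:  # toy path graph
    adj[u, v] = adj[v, u] = 1.0
x = rng.standard_normal((n_nodes, in_dim))

dims = [in_dim] + [hidden] * 5           # 5 GIN layers, as in the paper
params = [(rng.standard_normal((d, hidden)) * 0.1, np.zeros(hidden),
           rng.standard_normal((hidden, hidden)) * 0.1, np.zeros(hidden))
          for d in dims[:-1]]

z = gin_encoder(x, adj, params)
print(z.shape)  # (128,) graph-level embedding
```

The sum aggregation and sum pooling are what give GIN its injectivity argument; the actual experiments train this encoder contrastively before any classifier sees the embeddings.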
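The evaluation protocol quoted in the Dataset Splits row (frozen embeddings, SVM classifier, 10-fold cross-validation) can be sketched with scikit-learn. The embeddings and labels below are random stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for 128-dim GIN graph embeddings and binary graph labels
Z = rng.standard_normal((200, 128))
y = rng.integers(0, 2, size=200)

# 10-fold cross-validation with an SVM on the frozen embeddings,
# mirroring the linear-evaluation protocol described in the report
scores = cross_val_score(SVC(C=1.0), Z, y, cv=10)
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```

Repeating this loop with different seeds and averaging gives the mean-and-std numbers the paper reports (5 repeats for TUDataset, 10 seeds for the MoleculeNet transfer setting).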