ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning
Authors: Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that ProGCL brings notable and consistent improvements over base GCL methods and yields multiple state-of-the-art results on several unsupervised benchmarks or even exceeds the performance of supervised ones. Also, ProGCL is readily pluggable into various negatives-based GCL methods for performance improvement. |
| Researcher Affiliation | Academia | Westlake University; Westlake Institute for Advanced Study; School of Computer Science, Zhejiang University. Correspondence to: Jun Xia <xiajun@westlake.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 ProGCL-weight & -mix (Transductive). Input: T, G, f, g, N, normalized cosine similarity s, epoch for fitting BMM E, mode (weight or mix). A minimal sketch of the BMM-fitting step appears after the table. |
| Open Source Code | Yes | We release the code at https://github.com/junxia97/ProGCL. |
| Open Datasets | Yes | We conduct experiments on seven widely-used datasets including Amazon-Photo, Amazon-Computers, Wiki-CS, Coauthor-CS, Reddit, Flickr and ogbn-arXiv. For transductive tasks, we split Amazon-Photo, Amazon-Computers, Wiki-CS and Coauthor-CS for the training, validation and testing following (Zhu et al., 2021c). For inductive tasks, we split Reddit and Flickr following (Velickovic et al., 2019; Zeng et al., 2019). The experimental setting of ogbn-arXiv is the same as BGRL (Thakoor et al., 2021). |
| Dataset Splits | Yes | For transductive tasks, we split Amazon-Photo, Amazon-Computers, Wiki-CS and Coauthor-CS for the training, validation and testing following (Zhu et al., 2021c). |
| Hardware Specification | Yes | OOM: out of memory on a 32GB GPU. |
| Software Dependencies | No | The paper mentions software components like the 'Adam SGD optimizer' and 'Glorot initialization', and uses frameworks like GCN and GraphSAGE, but it does not specify exact version numbers for any libraries, programming languages, or specific software dependencies. |
| Experiment Setup | Yes | Other hyper-parameters of ProGCL can be seen in Table 8. For transductive tasks, the two hyperparameters were chosen in a grid: E ∈ {50, 100, 200, 300, 400, 600, 800} and w_init ∈ {0.01, 0.05, 0.10, 0.15, 0.20, 0.25}. We further study the influence of E, w_init, I and M in Figure 10. The search grid is enumerated in the second sketch below the table. |
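
The Pseudocode row above summarizes Algorithm 1 (ProGCL-weight & -mix), which fits a two-component Beta Mixture Model (BMM) to the distribution of anchor-negative similarities and uses the resulting posteriors to down-weight probable false negatives (ProGCL-weight) or to synthesize new negatives (ProGCL-mix). The following is a minimal sketch of that BMM-fitting step only; the EM routine, initialization constants, rescaling of cosine similarities to (0, 1), and function names are illustrative assumptions rather than the authors' released implementation (see https://github.com/junxia97/ProGCL for the official code).

```python
import numpy as np
from scipy.stats import beta as beta_dist


def fit_two_component_bmm(s, n_iter=10, eps=1e-6):
    """Fit a two-component Beta Mixture Model to similarities s in (0, 1) via EM."""
    s = np.clip(np.asarray(s, dtype=float), eps, 1.0 - eps)
    alphas = np.array([2.0, 4.0])   # component 0 initialized toward low similarities
    betas = np.array([4.0, 2.0])    # component 1 initialized toward high similarities
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for every similarity score.
        lik = np.stack([w * beta_dist.pdf(s, a, b)
                        for w, a, b in zip(weights, alphas, betas)])
        resp = lik / (lik.sum(axis=0, keepdims=True) + eps)
        # M-step: weighted method-of-moments update of each Beta component.
        for k in range(2):
            r = resp[k]
            mean = (r * s).sum() / (r.sum() + eps)
            var = (r * (s - mean) ** 2).sum() / (r.sum() + eps)
            common = max(mean * (1.0 - mean) / (var + eps) - 1.0, eps)
            alphas[k], betas[k] = mean * common, (1.0 - mean) * common
            weights[k] = r.mean()
    return alphas, betas, weights


def true_negative_posterior(s, alphas, betas, weights, eps=1e-6):
    """Posterior that each negative belongs to the lower-similarity component."""
    s = np.clip(np.asarray(s, dtype=float), eps, 1.0 - eps)
    lik = np.stack([w * beta_dist.pdf(s, a, b)
                    for w, a, b in zip(weights, alphas, betas)])
    tn = int(np.argmin(alphas / (alphas + betas)))  # component with the smaller mean
    return lik[tn] / (lik.sum(axis=0) + eps)


# Toy usage: rescale cosine similarities from [-1, 1] to (0, 1), fit the BMM,
# then read off per-negative posteriors that could weight or mix negatives.
cos_sim = np.random.uniform(-1.0, 1.0, size=1000)
s = (cos_sim + 1.0) / 2.0
params = fit_two_component_bmm(s, n_iter=20)
posteriors = true_negative_posterior(s, *params)
```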
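
For the Experiment Setup row, the quoted search space over the BMM-fitting epoch E and the initial weight w_init can be enumerated as below. The dictionary layout is a hypothetical convenience; the actual training and validation loop that scores each configuration is omitted and not part of the paper's description.

```python
from itertools import product

# Hyperparameter grids quoted in the "Experiment Setup" row (transductive tasks).
E_GRID = [50, 100, 200, 300, 400, 600, 800]          # epoch at which the BMM is fitted
W_INIT_GRID = [0.01, 0.05, 0.10, 0.15, 0.20, 0.25]   # initial weight w_init

# Enumerate the 7 x 6 = 42 candidate configurations; each would be trained and
# evaluated on the validation split to pick the best pair.
configs = [{"E": E, "w_init": w} for E, w in product(E_GRID, W_INIT_GRID)]
```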