Perfect Alignment May be Poisonous to Graph Contrastive Learning
Authors: Jingyu Liu, Huayi Tang, Yong Liu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experiments In this section, we mainly evaluate the performance of the methods we proposed on six datasets: Cora, CiteSeer, PubMed, DBLP, Amazon-Photo and Amazon-Computer. We select 3 contrastive learning GNN, GRACE (Zhu et al., 2020), GCA (Zhu et al., 2021) and AD-GCL (Suresh et al., 2021), then we integrate those models with our proposed methods to verify its applicability and correctness of the theory. |
| Researcher Affiliation | Academia | 1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China. Correspondence to: Yong Liu <liuyonggsai@ruc.edu.cn>. |
| Pseudocode | No | The paper describes algorithms and methods in prose but does not include any structured pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | The code is available at https://github.com/somebodyhh1/GRACEIS |
| Open Datasets | Yes | Table 3 (dataset download links): Cora: https://github.com/kimiyoung/planetoid/raw/master/data; Citeseer: https://github.com/kimiyoung/planetoid/raw/master/data; Pubmed: https://github.com/kimiyoung/planetoid/raw/master/data; DBLP: https://github.com/abojchevski/graph2gauss/raw/master/data/dblp.npz; Amazon-Photo: https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz; Amazon-Computers: https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_computers.npz |
| Dataset Splits | No | In D.1. Datasets and Experimental Details, the paper states: 'in all 6 datasets we randomly choose 10% of nodes for training and the rest for testing.' This specifies a training and testing split but does not explicitly define a separate validation split or the proportion of the 'rest' allocated for testing versus validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. It only vaguely mentions 'Public Computing Cloud, Renmin University of China' in the acknowledgements, which is not a specific hardware specification. |
| Software Dependencies | No | The paper mentions software components such as 'GCNConv', states that logistic regression is used for downstream classification with the 'liblinear' solver, and that optimization uses 'Adam', but it does not provide version numbers for these components or for the underlying programming languages and frameworks (e.g., Python, PyTorch). |
| Experiment Setup | Yes | Table 4 (hyperparameter settings; columns: learning rate, weight decay, num layers, τ, epochs, hidden dim, activation). Cora: 5×10⁻⁴, 10⁻⁶, 2, 0.4, 200, 128, ReLU; Citeseer: 10⁻⁴, 10⁻⁶, 2, 0.9, 200, 256, PReLU; Pubmed: 10⁻⁴, 10⁻⁶, 2, 0.7, 200, 256, ReLU; DBLP: 10⁻⁴, 10⁻⁶, 2, 0.7, 200, 256, ReLU; Amazon-Photo: 10⁻⁴, 10⁻⁶, 2, 0.3, 200, 256, ReLU; Amazon-Computers: 10⁻⁴, 10⁻⁶, 2, 0.2, 200, 128, RReLU |
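The evaluation protocol quoted in the table (a random 10% train / 90% test node split, with logistic regression and the liblinear solver as the downstream classifier) can be sketched as follows. This is a minimal sketch with random placeholder embeddings standing in for the paper's trained GNN representations; the variable names and array sizes are assumptions, not taken from the released code.

```python
# Sketch of the reported evaluation protocol: random 10% node split,
# then a liblinear logistic-regression probe on frozen embeddings.
# NOTE: embeddings/labels here are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_nodes, dim, num_classes = 1000, 128, 7
embeddings = rng.normal(size=(num_nodes, dim))      # stand-in for encoder output
labels = rng.integers(0, num_classes, size=num_nodes)

# "randomly choose 10% of nodes for training and the rest for testing"
perm = rng.permutation(num_nodes)
n_train = int(0.1 * num_nodes)
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Downstream classifier: logistic regression with the liblinear solver
clf = LogisticRegression(solver="liblinear")
clf.fit(embeddings[train_idx], labels[train_idx])
accuracy = clf.score(embeddings[test_idx], labels[test_idx])
```

With random embeddings the accuracy is near chance; the point of the sketch is the split/probe structure, which matches the protocol described in the paper's appendix D.1.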