Rethinking the Promotion Brought by Contrastive Learning to Semi-Supervised Node Classification
Authors: Deli Chen, Yankai Lin, Lei Li, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six benchmark graph datasets, including the enormous OGB-Products graph, show that TIFA-GCL can bring a larger improvement than existing GCL methods in both transductive and inductive settings. Further experiments demonstrate the generalizability and interpretability of TIFA-GCL. |
| Researcher Affiliation | Collaboration | Deli Chen1, Yankai Lin1, Lei Li2, Xuancheng Ren2, Peng Li3, Jie Zhou1, Xu Sun2 — 1 Pattern Recognition Center, WeChat AI, Tencent Inc., China; 2 MOE Key Lab of Computational Linguistics, School of Computer Science, Peking University; 3 Institute for AI Industry Research (AIR), Tsinghua University |
| Pseudocode | No | The paper mentions 'The details of TIFA-graph perturbation algorithm is shown in Appendix B1' and 'a novel subgraph sampling (shown in Appendix B2)', but the pseudocode itself is not provided in the main text. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on six widely-used graph datasets, namely paper citation networks [Sen et al., 2008] (CORA, CiteSeer and Pubmed; these three datasets are the most widely used benchmark networks [Yang et al., 2016; Kipf and Welling, 2017; Veličković et al., 2018] in the SSNC studies), Amazon Co-purchase networks [Shchur et al., 2018] (Photo and Computers) and the enormous OGB-Products graph [Hu et al., 2020]. |
| Dataset Splits | No | The paper states 'We follow the benchmark setting [Kipf and Welling, 2017; Yang et al., 2016; Hamilton et al., 2017]' and mentions 'More details about the datasets, experiment settings and hyper-parameters can be found in Appendix C', but does not explicitly provide the training, validation, or test split percentages or sample counts in the provided text. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as 'PyTorch 1.9' or 'Python 3.8'. |
| Experiment Setup | Yes | More details about the datasets, experiment settings and hyper-parameters can be found in Appendix C. |