A New Mechanism for Eliminating Implicit Conflict in Graph Contrastive Learning

Authors: Dongxiao He, Jitao Zhao, Cuiying Huo, Yongqi Huang, Yuxiao Huang, Zhiyong Feng

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our method. ... Validation: We conducted experiments on six datasets and two downstream tasks. An abundance of experimental evidence confirms the efficacy of PiGCL.
Researcher Affiliation | Academia | 1College of Intelligence and Computing, Tianjin University, Tianjin, China; 2Department of Data Science, George Washington University, NW Washington DC, America. {hedongxiao, zjtao, huocuiying, yqhuang, zyfeng}@tju.edu.cn, yuxiaohuang@email.gwu.edu
Pseudocode | No | No, the paper includes a diagram (Fig. 3) illustrating the method's overview but does not provide structured pseudocode or an algorithm block.
Open Source Code | Yes | More details and the source code are available at https://github.com/hedongxiao-tju/PiGCL.
Open Datasets | Yes | We evaluate our method on six widely-used datasets: Cora, Citeseer, and PubMed from Planetoid (Kipf and Welling 2017), Photo and Computers from Amazon (McAuley et al. 2015), and CS from Coauthor (Sinha et al. 2015).
Dataset Splits | Yes | For the classification tasks, ... Specifically, we use 10% of the data for training the downstream classifier and the remaining 90% for testing.
Hardware Specification | Yes | All experiments were conducted on a server equipped with an RTX 3090 GPU and an i5-12400 CPU.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the main text of the paper.
Experiment Setup | No | The paper states: 'The settings for the hyperparameters can be found in the Appendix C.' This indicates that specific experimental setup details such as hyperparameters are not provided in the main text.
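The dataset-split row above quotes a standard linear-evaluation protocol: after self-supervised training, 10% of nodes train the downstream classifier and the remaining 90% are held out for testing. A minimal sketch of such a split is shown below; the function name, seed, and the use of a plain random shuffle are illustrative assumptions, not details taken from the paper.

```python
import random

def split_nodes(num_nodes, train_ratio=0.1, seed=0):
    """Randomly split node indices into train/test sets for a
    downstream classifier (10%/90% protocol, as a sketch).
    Note: the shuffle-based split and fixed seed are assumptions;
    the paper does not specify how the 10% sample is drawn."""
    rng = random.Random(seed)
    indices = list(range(num_nodes))
    rng.shuffle(indices)
    n_train = int(num_nodes * train_ratio)
    return indices[:n_train], indices[n_train:]

# Example: Cora has 2708 nodes, so 270 land in the train split.
train_idx, test_idx = split_nodes(2708)
```

The learned node embeddings indexed by `train_idx` would then fit a simple classifier (e.g., logistic regression), with accuracy reported on `test_idx`.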