Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective

Authors: Qirui Ji, Jiangmeng Li, Jie Hu, Rui Wang, Changwen Zheng, Fanjiang Xu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The conducted exploratory experiments attest to the feasibility of the aforementioned roadmap. Empirically, compared with state-of-the-art methods, our method can yield significant performance boosts on various benchmarks with respect to discriminability and transferability.
Researcher Affiliation | Academia | 1) Science & Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences; 2) State Key Laboratory of Intelligent Game; 3) University of Chinese Academy of Sciences; 4) State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
Pseudocode | Yes | Algorithm 1: The DRGCL training algorithm
Open Source Code | Yes | The code implementation of our method is available at https://github.com/ByronJi/DRGCL.
Open Datasets | Yes | For unsupervised learning, we benchmark DRGCL on eight established datasets in TU datasets (Morris et al. 2020). (See the dataset-loading sketch below.)
Dataset Splits | Yes | Mean 10-fold cross-validation accuracy with 5 runs. (See the evaluation sketch below.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | The details of our model architectures and corresponding hyper-parameters are summarized in Table 6, including backbone neurons [32, 32, 32], projection neurons [512, 512, 512], pre-train lr 0.01, fine-tune lr {0.01, 0.001, 0.0001}, temperature τ 0.1, training epochs 20, trade-off parameter λ 0.001, and trade-off parameter α 10. (See the configuration sketch below.)
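
To make the "Open Datasets" row concrete, below is a minimal sketch of loading one of the TU datasets, assuming PyTorch Geometric as the data library. The dataset name ("MUTAG") and the root path are illustrative choices, not specifics from the paper, which only states that eight TU datasets are used.

```python
# Minimal sketch: loading a TU dataset with PyTorch Geometric.
# The dataset name ("MUTAG") and root directory are illustrative,
# not specifics taken from the paper.
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data/TUDataset", name="MUTAG")
print(f"{len(dataset)} graphs, "
      f"{dataset.num_features} node features, {dataset.num_classes} classes")
```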
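The "Dataset Splits" row reports mean 10-fold cross-validation accuracy over 5 runs. A minimal sketch of that protocol, assuming scikit-learn and a linear SVM evaluated on precomputed graph embeddings; the `embeddings` and `labels` arrays are hypothetical placeholders for the representations produced by a pre-trained encoder:

```python
# Minimal sketch: mean accuracy over 10-fold CV, repeated for 5 runs.
# The linear-SVM probe and the `evaluate` helper are illustrative
# assumptions, not the paper's exact evaluation code.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

def evaluate(embeddings: np.ndarray, labels: np.ndarray, runs: int = 5):
    """Return mean and std of 10-fold CV accuracy over `runs` repetitions."""
    accs = []
    for seed in range(runs):
        kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
        scores = cross_val_score(LinearSVC(), embeddings, labels, cv=kf)
        accs.append(scores.mean())
    return float(np.mean(accs)), float(np.std(accs))
```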
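Finally, the hyper-parameters quoted from Table 6 in the "Experiment Setup" row can be gathered into a single config. The sketch below also pairs the reported temperature τ = 0.1 with a generic NT-Xent contrastive loss of the kind commonly used in graph contrastive learning; this loss is an illustrative stand-in, not the paper's exact DRGCL objective.

```python
# Minimal sketch: the Table 6 hyper-parameters as a config dict, plus a
# generic NT-Xent loss using the reported temperature. Only the numeric
# values come from the paper; the loss formulation is an assumption.
import torch
import torch.nn.functional as F

CONFIG = {
    "backbone_neurons": [32, 32, 32],        # GNN encoder widths
    "projection_neurons": [512, 512, 512],   # projection head widths
    "pretrain_lr": 0.01,
    "finetune_lrs": [0.01, 0.001, 0.0001],   # candidate fine-tune rates
    "temperature": 0.1,                      # tau
    "epochs": 20,
    "lambda_tradeoff": 0.001,
    "alpha_tradeoff": 10,
}

def nt_xent(z1: torch.Tensor, z2: torch.Tensor,
            tau: float = CONFIG["temperature"]) -> torch.Tensor:
    """Generic NT-Xent loss between two views of graph embeddings (B x D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                  # B x B cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, targets)     # positive pairs on the diagonal
```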