Let Invariant Rationale Discovery Inspire Graph Contrastive Learning

Authors: Sihang Li, Xiang Wang, An Zhang, Yingxin Wu, Xiangnan He, Tat-Seng Chua

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment. Each entry below gives the variable, the extracted result, and the supporting evidence (LLM response) from the paper.
Research Type: Experimental. Evidence: "On biochemical molecule and social network benchmark datasets, the state-of-the-art performance of RGCL demonstrates the effectiveness of rationale-aware views for contrastive learning. Our codes are available at https://github.com/lsh0520/RGCL." and "In this section, extensive experiments are conducted to answer two research questions:"
Researcher Affiliation: Academia. Evidence: "(1) School of Information Science and Technology, University of Science and Technology of China, Hefei, China; (2) School of Cyber Science and Technology, University of Science and Technology of China, Hefei, China; (3) Sea-NExT Joint Lab, National University of Singapore, Singapore; (4) School of Data Science, University of Science and Technology of China, Hefei, China."
Pseudocode: Yes. Evidence: "A. Rationale-aware Graph Contrastive Learning (RGCL) algorithm" and "Algorithm 1 RGCL algorithm"
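For orientation, here is a minimal sketch of what a rationale-aware contrastive training step can look like in plain PyTorch. The names (`RationaleGenerator`, `rgcl_step`, `nt_xent`) and the hard Bernoulli view sampling are illustrative assumptions, not a transcription of the paper's Algorithm 1; consult the released code at https://github.com/lsh0520/RGCL for the authors' implementation.

```python
# Illustrative sketch only: `encoder` is any graph encoder that maps masked
# node features plus a graph-membership index to one embedding per graph.
import torch
import torch.nn.functional as F

class RationaleGenerator(torch.nn.Module):
    """Scores each node's probability of belonging to the rationale."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = torch.nn.Linear(dim, 1)

    def forward(self, node_feats):
        # (num_nodes,) keep-probabilities in (0, 1)
        return torch.sigmoid(self.scorer(node_feats)).squeeze(-1)

def nt_xent(z1, z2, tau=0.5):
    """Normalized-temperature cross-entropy over a batch of view pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                            # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on diagonal
    return F.cross_entropy(logits, labels)

def rgcl_step(encoder, generator, node_feats, batch_index):
    """One contrastive step over two stochastic rationale views."""
    p = generator(node_feats)                 # per-node rationale scores
    # Hard Bernoulli masks keep the sketch short; gradients do not flow
    # through torch.bernoulli, so end-to-end training of the generator
    # would need a differentiable relaxation.
    view1 = node_feats * torch.bernoulli(p).unsqueeze(-1)
    view2 = node_feats * torch.bernoulli(p).unsqueeze(-1)
    z1 = encoder(view1, batch_index)          # (B, d) graph embeddings
    z2 = encoder(view2, batch_index)
    return nt_xent(z1, z2)
```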
Open Source Code: Yes. Evidence: "Our codes are available at https://github.com/lsh0520/RGCL."
Open Datasets: Yes. Evidence: "On MNIST-Superpixel and MUTAG datasets..." and "we use Zinc-2M, 2 million unlabeled molecule graphs sampled from the ZINC15 database (Sterling & Irwin, 2015), to pre-train the backbone model and rationale generator."
Dataset Splits: No. The paper describes how the data is used (pre-training, fine-tuning) and names the splitting method (scaffold split), but it does not provide explicit train/validation/test percentages, counts, or citations to specific predefined splits.
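Since the entry above names scaffold splitting without split details, here is a minimal sketch of a standard scaffold split using RDKit's MurckoScaffold utilities. The function name and the 80/10/10 default ratios are illustrative assumptions, not values from the paper.

```python
# Sketch of a Bemis-Murcko scaffold split: molecules sharing a scaffold
# always land in the same partition, which makes this split harder than
# a random split and better at testing generalization to new scaffolds.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, frac_train=0.8, frac_valid=0.1):
    buckets = defaultdict(list)
    for idx, smi in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(
            smiles=smi, includeChirality=True)
        buckets[scaffold].append(idx)
    # Fill train with the largest scaffold groups first, so the test set
    # tends to hold rare scaffolds.
    groups = sorted(buckets.values(), key=len, reverse=True)
    n = len(smiles_list)
    train, valid, test = [], [], []
    for group in groups:
        if len(train) + len(group) <= frac_train * n:
            train += group
        elif len(valid) + len(group) <= frac_valid * n:
            valid += group
        else:
            test += group
    return train, valid, test
```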
Hardware Specification: Yes. Evidence: "And in realistic implementation on our platform (GeForce RTX 2080 Ti and Intel(R) Core(TM) i9-9900X)"
Software Dependencies: No. The paper names model components such as GNN, GCN, GIN, and MLP architectures, but it does not specify version numbers for underlying libraries (e.g., PyTorch, TensorFlow) or for the programming language used.
Experiment Setup: Yes. Evidence: "E. Model Structure and Hyperparameters: To make a fair comparison, we follow the backbone model settings in You et al. (2020). Our model architectures, the main body of which includes GCN (Kipf & Welling, 2017) and GIN (Xu et al., 2019), and corresponding hyperparameters are summarized in Table 8."
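As a companion to the backbone description, here is a minimal GIN-style encoder sketch in PyTorch Geometric. The 300-dimensional, 5-layer configuration follows common practice for this benchmark family and is an assumption, not a transcription of the paper's Table 8.

```python
# Minimal GIN backbone: stacked GINConv layers over node features,
# followed by mean pooling to produce one embedding per graph.
import torch
from torch_geometric.nn import GINConv, global_mean_pool

class GINEncoder(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=300, num_layers=5):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        dims = [in_dim] + [hidden_dim] * num_layers
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            mlp = torch.nn.Sequential(
                torch.nn.Linear(d_in, d_out),
                torch.nn.ReLU(),
                torch.nn.Linear(d_out, d_out),
            )
            self.convs.append(GINConv(mlp))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        return global_mean_pool(x, batch)  # (num_graphs, hidden_dim)
```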