Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

Authors: Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, MA Kaili, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on 16 synthetic or real-world datasets, including a challenging setting, DrugOOD, from AI-aided drug discovery, validate the superior OOD performance of CIGA.
Researcher Affiliation Collaboration Yongqiang Chen1, Yonggang Zhang2, Yatao Bian3, Han Yang1, Kaili Ma1, Binghui Xie1, Tongliang Liu4, Bo Han2, James Cheng1 — 1The Chinese University of Hong Kong; 2Hong Kong Baptist University; 3Tencent AI Lab; 4TML Lab, The University of Sydney. Emails: {yqchen,hyang,klma,bhxie21,jcheng}@cse.cuhk.edu.hk, yatao.bian@gmail.com, tongliang.liu@sydney.edu.au, {csygzhang,bhanml}@comp.hkbu.edu.hk
Pseudocode No The paper describes the CIGA framework and its objectives, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code Yes Code is available at https://github.com/LFhase/CIGA.
Open Datasets Yes We use the SPMotif datasets from DIR [104]... we also use Drug OOD [40] from AI-aided drug discovery... convert the Colored MNIST from IRM [4]... and split Graph-SST [122]... We use the datasets in Bevilacqua et al. [11] that are converted from TU benchmarks [67]. and The data used are all publicly available datasets.
Dataset Splits Yes We repeat the evaluation multiple times, select models based on the validation performances, and report the mean and standard deviation of the corresponding metric. and Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Sec. G.
Hardware Specification Yes Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Sec. G.3.
Software Dependencies No The paper mentions that training details are provided in Appendix G, but does not explicitly list specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8) in the main text or the provided checklist for software.
Experiment Setup Yes Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Sec. G. and To examine how sensitive CIGA is to the hyperparameters α and β for the contrastive loss and hinge loss, respectively, we conduct experiments based on the hardest datasets from each table...