Invariant Learning via Probability of Sufficient and Necessary Causes

Authors: Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang

NeurIPS 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on both synthetic and real-world benchmarks demonstrate the effectiveness of the proposed method.
Researcher Affiliation Collaboration University College London; University of Technology Sydney; Hong Kong Baptist University; King's College London; Zhejiang Lab; ByteDance Research; University of Manchester
Pseudocode No The paper describes the proposed algorithm (CaSN) and its optimization process in prose, but does not provide a formal pseudocode block or algorithm box.
Open Source Code Yes The detailed implementation can be found at the GitHub repository: https://github.com/ymy4323460/CaSN.
Open Datasets Yes We test the performance on the commonly used Colored MNIST (Ahuja et al., 2020a), PACS (Li et al., 2017), and VLCS (Fang et al., 2013) datasets.
Dataset Splits Yes The training dataset is randomly split into training and validation sets, and hyperparameters are selected to maximize performance on the validation set.
Hardware Specification Yes All experiments are conducted on a server with a 16-core CPU, 128 GB of memory, and an RTX 5000 GPU.
Software Dependencies No The paper mentions using the "DomainBed" codebase, specific network architectures such as "ResNet-50", and "SGD as the optimizer", but does not provide version numbers for these software components or programming languages.
Experiment Setup Yes The hyperparameters are shown in Table 3. The general hyperparameters (e.g., ResNet, MNIST, not-MNIST) are taken directly from Table 8 in Gulrajani & Lopez-Paz (2020). All experiments are run 2 times.
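The dataset-split row above describes a standard protocol: shuffle the training data, hold out a validation fraction, and pick the hyperparameter setting that maximizes validation performance. A minimal sketch of that protocol is below; the function names, the 20% validation fraction, and the fixed seed are illustrative assumptions, not details taken from the paper.

```python
import random


def split_train_val(samples, val_fraction=0.2, seed=0):
    """Randomly split a dataset into training and validation subsets."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_val = int(len(samples) * val_fraction)
    val = [samples[i] for i in indices[:n_val]]
    train = [samples[i] for i in indices[n_val:]]
    return train, val


def select_hyperparameters(candidates, evaluate):
    """Return the candidate that maximizes the validation score."""
    return max(candidates, key=evaluate)
```

In practice `evaluate` would train the model with a candidate setting and return its accuracy on the validation set; here any scoring callable works.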