Invariant and Transportable Representations for Anti-Causal Domain Shifts

Authors: Yibo Jiang, Victor Veitch

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. |
| Researcher Affiliation | Collaboration | Yibo Jiang (Department of Computer Science, University of Chicago) and Victor Veitch (Department of Statistics, University of Chicago; Google Research). |
| Pseudocode | No | The paper describes the learning algorithm in detail using text and mathematical equations in Section 4, but it does not include a distinct 'Pseudocode' or 'Algorithm' block or figure. |
| Open Source Code | Yes | Code is available at https://github.com/ybjiaang/ACTIR. |
| Open Datasets | Yes | Color MNIST modifies the original MNIST dataset [Arj+19]. ... The goal of the Camelyon17 dataset [Ban+18] is to predict the existence of a tumor given a region of tissue. |
| Dataset Splits | Yes | We create two training domains with β_e ∈ {0.95, 0.7}, one validation domain with β_e = 0.6, and one test domain with β_e = 0.1 (Section 6.1, Synthetic Dataset). ... Following the WILDS benchmark [Koh+21], we use 3 for training, 1 for validation, and the last one for test (Section 6.3, Camelyon17). See the split sketch below the table. |
| Hardware Specification | No | The paper states, 'We also acknowledge the University of Chicago's Research Computing Center for providing computing resources' (Acknowledgments). This is a general acknowledgment of a computing resource and does not name any specific hardware (e.g., GPU model, CPU, or memory). |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer and a ResNet-18 model but does not give version numbers for these components or for any other libraries used (e.g., 'PyTorch 1.9'). |
| Experiment Setup | Yes | For the fine-tuning test, we run 20 steps with a learning rate of 10^-2 (Sections 6.1 and 6.2). ... We use a three-layer neural network with hidden size 8 and ReLU activation for Φ and train the neural network with the Adam optimizer (Section 6.1, Synthetic Dataset). ... We use a pre-trained ResNet-18 model for our Φ and train the whole model using the Adam optimizer with a learning rate of 10^-4 (Section 6.3, Camelyon17). See the setup sketch below the table. |
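
The dataset-split row can be made concrete with a short sketch. This is illustrative only: `DOMAIN_BETAS` and `make_synthetic_domain` are hypothetical names, not taken from the ACTIR repository; the Camelyon17 loading uses the standard WILDS API, assuming the `wilds` package is installed.

```python
from wilds import get_dataset

# Synthetic dataset (Section 6.1): one domain per spurious-correlation
# strength beta_e, matching the split quoted in the table above.
DOMAIN_BETAS = {
    "train": [0.95, 0.7],  # two training domains
    "val":   [0.6],        # one validation domain
    "test":  [0.1],        # one test domain
}

def make_splits(make_synthetic_domain):
    """Instantiate one synthetic domain per beta_e value (hypothetical factory)."""
    return {
        split: [make_synthetic_domain(beta) for beta in betas]
        for split, betas in DOMAIN_BETAS.items()
    }

# Camelyon17 (Section 6.3): the WILDS benchmark ships the hospital split
# quoted above (3 hospitals for training, 1 for validation, 1 for test).
camelyon = get_dataset(dataset="camelyon17", download=True)
train_data = camelyon.get_subset("train")
val_data = camelyon.get_subset("val")    # out-of-distribution validation hospital
test_data = camelyon.get_subset("test")  # held-out test hospital
```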
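Similarly, the experiment-setup row translates into a few lines of PyTorch. This is a minimal sketch, assuming PyTorch and torchvision: the hyperparameters (hidden size 8, ReLU, Adam, learning rates 10^-2 and 10^-4, 20 fine-tuning steps) are quoted from the paper, while the module wiring, the input/output dimensions, and the use of Adam for the fine-tuning steps are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Synthetic dataset (Section 6.1): three-layer network with hidden size 8
# and ReLU activation for the representation Phi. input_dim and rep_dim are
# illustrative; the quoted text does not fix them.
def make_synthetic_phi(input_dim: int, rep_dim: int) -> nn.Module:
    return nn.Sequential(
        nn.Linear(input_dim, 8), nn.ReLU(),
        nn.Linear(8, 8), nn.ReLU(),
        nn.Linear(8, rep_dim),
    )

# Camelyon17 (Section 6.3): pre-trained ResNet-18 as Phi, trained end to end
# with Adam at learning rate 10^-4.
phi = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
optimizer = torch.optim.Adam(phi.parameters(), lr=1e-4)

# Fine-tuning test (Sections 6.1 and 6.2): 20 steps at learning rate 10^-2.
# The quoted text does not name the fine-tuning optimizer; Adam is assumed here.
def fine_tune(model: nn.Module, loss_fn, x, y, steps: int = 20, lr: float = 1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```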