Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
Authors: Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, Irina Rish
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose an approach that combines both these principles and demonstrate its effectiveness in several experiments. (An illustrative sketch of such a combined objective appears after the table.) |
| Researcher Affiliation | Academia | Mila Quebec AI Institute, Université de Montréal. Correspondence to: kartik.ahuja@mila.quebec. |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | The code for experiments can be found at https://github.com/ahujak/IB-IRM. |
| Open Datasets | Yes | We use all the datasets in Table 2, Terra Incognita dataset (Beery et al., 2018), and COCO (Ahmed et al., 2021). |
| Dataset Splits | Yes | We follow the same protocol for tuning hyperparameters from Aubin et al. (2021); Arjovsky et al. (2019) for their respective datasets (see the Appendix for more details). |
| Hardware Specification | Yes | All the experiments were run on a server with an Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz processor and NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions using code from external GitHub repositories, but it does not specify software dependencies like programming language versions or library versions (e.g., Python 3.x, PyTorch 1.x) that were used for their experiments. |
| Experiment Setup | Yes | We follow the same protocol for tuning hyperparameters from Aubin et al. (2021); Arjovsky et al. (2019) for their respective datasets (see the Appendix for more details). For reproducibility, we use a fixed random seed of 0 across all experiments. (A seeding sketch appears after the table.) |
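
The paper's stated contribution is an objective that combines the invariance principle (IRM) with an information-bottleneck penalty. The paper presents this formally rather than as code, so the sketch below is only a minimal PyTorch illustration under common assumptions: the invariance term uses the IRMv1-style dummy-scale gradient penalty (Arjovsky et al., 2019), and the IB term is approximated by the per-environment variance of the learned representation. The function names (`irm_penalty`, `ib_irm_loss`) and penalty weights (`lam`, `gamma`) are illustrative, not taken from the authors' repository.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1-style penalty: squared gradient of the per-environment risk
    # with respect to a fixed dummy classifier scale w = 1.0.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad, = torch.autograd.grad(loss, scale, create_graph=True)
    return grad.pow(2).sum()

def ib_irm_loss(model, envs, lam=1.0, gamma=1.0):
    # envs: list of (x, y) batches, one batch per training environment.
    erm = inv = ib = 0.0
    for x, y in envs:
        logits = model(x)
        erm = erm + F.cross_entropy(logits, y)
        inv = inv + irm_penalty(logits, y)
        # Information-bottleneck surrogate (an assumption of this sketch):
        # penalize the within-environment variance of the representation.
        ib = ib + logits.var(dim=0).mean()
    n = len(envs)
    return (erm + lam * inv + gamma * ib) / n
```

In practice the weights `lam` and `gamma` would be tuned following the hyperparameter protocol of Aubin et al. (2021) and Arjovsky et al. (2019) cited in the table.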
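
The Experiment Setup row reports a fixed random seed of 0, but the paper does not show the seeding code itself. A minimal sketch of how such a seed is typically fixed in a PyTorch codebase (the linked repository is PyTorch-based) might look like the following; the helper name `set_seed` is illustrative.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    """Fix the common sources of randomness to a single seed."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(0)  # the fixed seed reported in the table above
```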