An Information-theoretic Approach to Distribution Shifts
Authors: Marco Federici, Ryota Tomioka, Patrick Forré
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "From our theoretical analysis and empirical evaluation, we conclude that the model selection procedure needs to be guided by careful considerations regarding the observed data, the factors used for correction, and the structure of the data-generating process." Also, from Section 4 (Experiments): "We evaluate the effectiveness of the criteria presented in section 3 and some of their most popular implementations on multiple versions of the CMNIST dataset (Arjovsky et al., 2019) produced by altering the data-generating process (figure 2a) to underline the shortcomings of the different methods." |
| Researcher Affiliation | Collaboration | Marco Federici, AMLab, University of Amsterdam (m.federici@uva.nl); Ryota Tomioka, Microsoft Research, Cambridge, UK (ryoto@microsoft.com); Patrick Forré, AI4Science Lab, AMLab, University of Amsterdam (p.d.forre@uva.nl) |
| Pseudocode | No | The paper presents theoretical formulations and descriptions of criteria and models but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "The implementation of the models in this work is available at https://github.com/mfederici/dsit" |
| Open Datasets | Yes | "We evaluate the effectiveness of the criteria presented in section 3 and some of their most popular implementations on multiple versions of the CMNIST dataset (Arjovsky et al., 2019)" |
| Dataset Splits | No | The paper mentions using 'CMNIST dataset' and discusses 'train' and 'test' distributions conceptually, but it does not provide specific percentages or counts for training, validation, or test splits for the experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions using neural network architectures and the Adam optimizer, but it does not specify any software dependencies with exact version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | "Further details regarding the neural network architectures, objectives, optimization and specific hyper-parameters can be found in appendix E." |