An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers
Authors: Ramakrishna Vedantam, David Lopez-Paz, David J. Schwab
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a large-scale empirical study testing the theory from Ben-David et al. (2007, 2010) on deep neural networks trained on the DomainBed (Gulrajani & Lopez-Paz, 2020) domain generalization benchmark. |
| Researcher Affiliation | Collaboration | Ramakrishna Vedantam (FAIR, New York) ramav@fb.com; David Lopez-Paz (FAIR, Paris) dlp@fb.com; David J. Schwab (ITS, CUNY Grad Center & FAIR, New York) davidjschwab@gmail.com |
| Pseudocode | No | The paper describes methods in prose and does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks within its main text. |
| Open Source Code | Yes | We do include one set of model weights and instructions to run the measures on the given model in the supplementary material. |
| Open Datasets | Yes | It also provides various datasets such as Rotated MNIST (Ghifary et al., 2015), VLCS (Fang et al., 2013), and PACS (Li et al., 2017b). |
| Dataset Splits | Yes | For both source S and target T, we hold out 50% of the data for validation. (See the split sketch after this table.) |
| Hardware Specification | Yes | We train approximately 12,000 models on a compute cluster with Volta GPUs using PyTorch (Paszke et al., 2019). |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' but does not provide a specific version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | For each combination of dataset and training environments, we pick 100 random hyperparameter settings of batch size, learning rate, weight decay, and dropout (for ResNet models). All models are trained for 5000 training steps and the model saved at the last step is used for analysis. (A hyperparameter-sampling sketch follows the table.) |
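The 50% holdout reported in the Dataset Splits row can be sketched with PyTorch's `torch.utils.data.random_split`. The helper name `holdout_split` and the seeding scheme below are illustrative assumptions, not the paper's actual code.

```python
import torch
from torch.utils.data import Dataset, random_split

def holdout_split(dataset: Dataset, holdout_fraction: float = 0.5, seed: int = 0):
    """Hold out a fraction of one environment's data for validation.

    The paper holds out 50% of the data for both source (S) and target (T)
    environments; this function and its fixed seed are illustrative.
    """
    n_val = int(len(dataset) * holdout_fraction)
    n_train = len(dataset) - n_val
    generator = torch.Generator().manual_seed(seed)
    train_set, val_set = random_split(dataset, [n_train, n_val], generator=generator)
    return train_set, val_set
```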
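The Experiment Setup row describes a random hyperparameter search. A minimal sketch follows; the search ranges are assumptions in the spirit of DomainBed's default spaces (Gulrajani & Lopez-Paz, 2020), not the paper's exact values.

```python
import numpy as np

def sample_hparams(rng: np.random.RandomState) -> dict:
    """Draw one random setting of the four tuned hyperparameters.

    All ranges here are illustrative assumptions, not the paper's spaces.
    """
    return {
        "batch_size": int(2 ** rng.uniform(3, 5.5)),    # roughly 8 to 45 examples
        "lr": 10 ** rng.uniform(-5.0, -3.5),            # log-uniform learning rate
        "weight_decay": 10 ** rng.uniform(-6.0, -2.0),  # log-uniform weight decay
        "dropout": float(rng.choice([0.0, 0.1, 0.5])),  # for ResNet models only
    }

# 100 random settings per (dataset, training environments) combination
rng = np.random.RandomState(0)
settings = [sample_hparams(rng) for _ in range(100)]
```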