Model-Based Domain Generalization
Authors: Alexander Robey, George J. Pappas, Hamed Hassani
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we report improvements of up to 30% over state-of-the-art domain generalization baselines on several benchmarks including Colored MNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS. |
| Researcher Affiliation | Academia | Department of Electrical and Systems Engineering University of Pennsylvania {arobey1,pappasg,hassani}@seas.upenn.edu |
| Pseudocode | Yes | Algorithm 1 Model-Based Domain Generalization (MBDG) |
| Open Source Code | Yes | Our code is publicly available at the following link: https://github.com/arobey1/mbdg. |
| Open Datasets | Yes | In our experiments, we report improvements of up to 30% over state-of-the-art domain generalization baselines on several benchmarks including Colored MNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS. |
| Dataset Splits | Yes | Model selection for each of these datasets was performed using hold-one-out cross-validation. For Camelyon17-WILDS and FMo W-WILDS, we use the repository provided with the WILDS dataset suite, and we perform model-selection using the out-of-distribution validation set provided in the WILDS repository. |
| Hardware Specification | No | The ethics checklist answers "Yes" to "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", but the provided text contains no specific hardware details such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies or version numbers. |
| Experiment Setup | Yes | Further details concerning hyperparameter tuning and model selection are deferred to Appendix E. Checklist (3.b): Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] |