Statistical Learning under Heterogeneous Distribution Shift
Authors: Max Simchowitz, Anurag Ajay, Pulkit Agrawal, Akshay Krishnamurthy
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in simpler features across numerous domains. |
| Researcher Affiliation | Collaboration | ¹CSAIL, Massachusetts Institute of Technology; ²Microsoft Research, New York City. |
| Pseudocode | No | The paper contains theoretical derivations and experimental descriptions but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using third-party GitHub repositories for running experiments (e.g., 'We used the github repo https://github.com/kohpangwei/group_DRO for running our experiments.'), but does not provide open-source code for the specific methodology or theoretical contributions described in the paper. |
| Open Datasets | Yes | We test our hypothesis on the paradigmatic Waterbird dataset (Sagawa et al., 2019). Appendix F.3 applies the same methodology to the Functional Map of the World (FMoW) dataset (Koh et al., 2021)... The CelebA dataset (Liu et al., 2015) consists of celebrity faces... |
| Dataset Splits | No | The paper mentions training and test sets (e.g., 'We train them on standard training set of waterbird dataset and test them on a sampled test set'), but it does not explicitly specify detailed train/validation/test splits, percentages, or cross-validation setups. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used to conduct the experiments (e.g., CPU/GPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions various models and optimizers (e.g., 'Adam optimizer (Kingma and Ba, 2014)', 'ResNet50 model (He et al., 2016)'), but it does not specify any software library names with their corresponding version numbers (e.g., 'PyTorch 1.9' or 'TensorFlow 2.x'). |
| Experiment Setup | Yes | We train hθ with Adam optimizer (Kingma and Ba, 2014) for 100 epochs using a learning rate of 0.001 and a batch size of 50. (A hedged sketch of this configuration is given below the table.) |
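
The Experiment Setup row reports concrete optimization hyperparameters: Adam, 100 epochs, learning rate 0.001, batch size 50. The following is a minimal, hypothetical PyTorch sketch of such a training loop; the placeholder model `h_theta`, the synthetic dataset, and the use of PyTorch itself are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the reported configuration:
# Adam optimizer, 100 epochs, learning rate 0.001, batch size 50.
# The model and data below are placeholders, not the paper's code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 1000 examples, 64 features, scalar regression targets.
train_dataset = TensorDataset(torch.randn(1000, 64), torch.randn(1000, 1))
train_loader = DataLoader(train_dataset, batch_size=50, shuffle=True)

# Placeholder predictor h_theta; the paper's architecture may differ.
h_theta = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

optimizer = torch.optim.Adam(h_theta.parameters(), lr=0.001)  # Adam, lr = 0.001
loss_fn = nn.MSELoss()

for epoch in range(100):  # 100 epochs, as reported
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(h_theta(x), y)
        loss.backward()
        optimizer.step()
```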