Model Agnostic Sample Reweighting for Out-of-Distribution Learning
Authors: Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, Tong Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present theoretical analysis in linear case to prove the insensitivity of MAPLE to model size, and empirically verify its superiority in surpassing state-of-the-art methods by a large margin. Code is available at https://github.com/x-zho14/MAPLE. ... In this section, we conduct a series of experiments to justify the superiority of our MAPLE in IRM and DRO. |
| Researcher Affiliation | Collaboration | 1The Hong Kong University of Science and Technology 2Tsinghua University 3Google Research. |
| Pseudocode | Yes | Algorithm 1 Model Agnostic Sample Reweighting (MAPLE) |
| Open Source Code | Yes | Code is available at https://github.com/x-zho14/MAPLE. |
| Open Datasets | Yes | For IRM experiments, Colored MNIST is the most widely used benchmark in IRM and Colored Object, CIFARMNIST are adopted to showcase the superior performance of MAPLE on more challenging large-scale settings (Arjovsky et al., 2019; Krueger et al., 2021b; Ahuja et al., 2020; Zhang et al., 2021a). We adopt two popular vision datasets, Waterbirds and CelebA, to validate the effectiveness of MAPLE on DRO problems (Wah et al., 2011; Sagawa et al., 2019; Liu et al., 2015; Liu et al., 2021a; Lin et al., 2021). |
| Dataset Splits | Yes | We split 10% training data as the validation dataset. |
| Hardware Specification | No | Table 4 lists 'GPUs' with a quantity (e.g., '8' for CelebA), but does not specify the model or type of GPUs, CPUs, or other hardware components used. |
| Software Dependencies | No | Table 4 mentions optimizers like 'Adam' and 'SGD' but does not provide specific version numbers for any software dependencies, libraries, or programming languages. |
| Experiment Setup | Yes | Table 4. Experimental Configurations of MAPLE. The table includes 'Batch Size', 'Outer Iterations', 'Inner Training Schedule', 'Sample Weight Learning Rate', 'Sample Probability Learning Rate', 'Model Parameter Learning Rate', 'Model Parameter Weight Decay'. |
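The "Pseudocode" row above points to Algorithm 1 (MAPLE), a bilevel scheme: an inner loop trains the model on a weighted loss, and an outer loop updates per-sample weights to improve out-of-distribution performance. The sketch below is purely illustrative and is not the paper's implementation — the toy data, the logistic-regression inner problem, and the finite-difference outer update (standing in for MAPLE's actual weight-update rule) are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (hypothetical, for illustration): a 90-sample majority where
# a spurious feature tracks the label, and a 10-sample minority where it is
# anti-correlated with the label.
n_maj, n_min = 90, 10
X_maj = rng.normal(size=(n_maj, 2))
y_maj = (X_maj[:, 0] > 0).astype(float)                        # invariant signal
X_maj[:, 1] = (2 * y_maj - 1) + 0.1 * rng.normal(size=n_maj)   # spurious agrees
X_min = rng.normal(size=(n_min, 2))
y_min = (X_min[:, 0] > 0).astype(float)
X_min[:, 1] = -(2 * y_min - 1) + 0.1 * rng.normal(size=n_min)  # spurious disagrees
X, y = np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])

# Validation set where the spurious feature carries no label information.
Xv = rng.normal(size=(50, 2))
yv = (Xv[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(weights, steps=200, lr=0.5):
    """Inner problem: logistic regression fit on the weighted training loss."""
    w = np.zeros(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (weights * (p - y))) / weights.sum()
    return w

def val_loss(w):
    p = np.clip(sigmoid(Xv @ w), 1e-8, 1 - 1e-8)
    return -np.mean(yv * np.log(p) + (1 - yv) * np.log(1 - p))

# Outer problem: nudge per-sample weights to reduce validation loss. A one-sided
# finite difference stands in here for the paper's actual weight-update rule.
weights = np.ones(len(y))
eps, outer_lr = 0.5, 1.0
for _ in range(5):
    base = val_loss(train(weights))
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        bumped = weights.copy()
        bumped[i] += eps
        grad[i] = (val_loss(train(bumped)) - base) / eps
    weights = np.clip(weights - outer_lr * grad, 1e-3, None)

loss_uniform = val_loss(train(np.ones(len(y))))
loss_reweighted = val_loss(train(weights))
print(f"uniform: {loss_uniform:.4f}  reweighted: {loss_reweighted:.4f}")
```

The finite-difference outer gradient is chosen only to keep the sketch self-contained; it scales poorly with sample count, which is precisely why bilevel methods like MAPLE use more efficient gradient estimates for the outer weight update.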