Understanding and Improving Feature Learning for Out-of-Distribution Generalization
Authors: Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that FeAT effectively learns richer features thus boosting the performance of various OOD objectives. |
| Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong, 2 RIKEN AIP, 3 Tencent AI Lab, 4 Hong Kong Baptist University; emails: {yqchen,kwzhou,jcheng}@cse.cuhk.edu.hk, wei.huang.vr@riken.jp, yatao.bian@gmail.com, bhanml@comp.hkbu.edu.hk |
| Pseudocode | Yes | Algorithm 1 FeAT: Feature Augmented Training |
| Open Source Code | Yes | Code is available at https://github.com/LFhase/FeAT. |
| Open Datasets | Yes | We conduct extensive experiments on both COLOREDMNIST [4, 16] and 6 datasets from the challenging benchmark, WILDS [39] |
| Dataset Splits | Yes | Table 8: A summary of dataset statistics from WILDS (columns: Dataset; # Examples for train/val/test; # Domains for train/val/test). |
| Hardware Specification | Yes | We run all the experiments on Linux servers with NVIDIA V100 graphics cards with CUDA 10.2. |
| Software Dependencies | Yes | We run all the experiments on Linux servers with NVIDIA V100 graphics cards with CUDA 10.2. |
| Experiment Setup | Yes | We use the Adam [37] optimizer with a learning rate of 1e-3 and a weight decay of 1e-3. |