Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Understanding and Improving Feature Learning for Out-of-Distribution Generalization

Authors: Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng

NeurIPS 2023 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that FeAT effectively learns richer features, thus boosting the performance of various OOD objectives. |
| Researcher Affiliation | Collaboration | The Chinese University of Hong Kong; RIKEN AIP; Tencent AI Lab; Hong Kong Baptist University |
| Pseudocode | Yes | Algorithm 1 FeAT: Feature Augmented Training |
| Open Source Code | Yes | Code is available at https://github.com/LFhase/FeAT. |
| Open Datasets | Yes | We conduct extensive experiments on both COLOREDMNIST [4, 16] and 6 datasets from the challenging benchmark, WILDS [39]. |
| Dataset Splits | Yes | Table 8: A summary of dataset statistics from WILDS (# examples and # domains for the train/val/test splits). |
| Hardware Specification | Yes | We run all the experiments on Linux servers with NVIDIA V100 graphics cards with CUDA 10.2. |
| Software Dependencies | Yes | We run all the experiments on Linux servers with NVIDIA V100 graphics cards with CUDA 10.2. |
| Experiment Setup | Yes | We use the Adam [37] optimizer with a learning rate of 1e-3 and a weight decay of 1e-3. |