Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head

Authors: Alan Wang, Minh Nguyen, Mert Sabuncu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our approach on three challenging real-world domain generalization tasks in computer vision."; "5 Experiments and Results"; "Table 2: Metric average ± standard deviation for all datasets (%). Higher is better. Bold is best and underline is second-best."
Researcher Affiliation | Academia | "Alan Q. Wang (Cornell University), Minh Nguyen (Cornell University), Mert R. Sabuncu (Cornell University)"
Pseudocode | No | The paper describes the proposed approach and optimization details in text, but does not include any explicitly labeled pseudocode or algorithm blocks. (An illustrative sketch of a generic Nadaraya-Watson head is given after this table.)
Open Source Code | Yes | "Our code is available at https://github.com/alanqrwang/nwhead."
Open Datasets | Yes | "We experiment on 3 real-world domain generalization tasks. Two are from the WILDS benchmark [23], and the third is a challenging melanoma detection task. Details on the datasets are summarized in Table 1, and further information is provided in the Appendix."; "1. The Camelyon-17 dataset [4]..."; "2. The melanoma dataset is from the International Skin Imaging Collaboration (ISIC) archive..."; "3. The Functional Map of the World (FMoW) dataset [6]..."
Dataset Splits | Yes | "For WILDS datasets, we follow all hyperparameters, use model selection techniques, and report metrics as specified by the benchmark. ... Similarly for ISIC, we use a pretrained ResNet-50 backbone as φ with no augmentations, and perform model selection on an OOD validation set."; "The training dataset is drawn from the first 3 hospitals, while out-of-distribution validation and out-of-distribution test datasets are sampled from the 4th hospital and 5th hospital, respectively." (A loading sketch for these WILDS splits follows the table.)
Hardware Specification | Yes | "All training and inference is done on an Nvidia A6000 GPU and all code is written in PyTorch."
Software Dependencies | No | The paper mentions PyTorch as the software used, but does not specify a version number for it or any other software dependencies.
Experiment Setup | Yes | "Table 3: Hyperparameter settings for various datasets."; "For each model variant, we train 5 separate models with different random seeds, and perform model selection on an out-of-distribution (OOD) validation set." (A toy sketch of this multi-seed protocol is shown after the table.)
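
The paper's method is built around a Nadaraya-Watson (NW) prediction head, and the Pseudocode row above notes that no algorithm block is given in the paper. The following is a minimal, illustrative sketch of a generic NW classification head in PyTorch; it is not the authors' implementation (see https://github.com/alanqrwang/nwhead), and the kernel choice (negative squared Euclidean distance with a temperature) and the name `nw_head` are assumptions made here for illustration.

```python
# Minimal sketch of a generic Nadaraya-Watson classification head in PyTorch.
# Not the authors' implementation; the similarity kernel and names are assumed.
import torch
import torch.nn.functional as F


def nw_head(query_feats, support_feats, support_labels, num_classes, temperature=1.0):
    """query_feats:    (B, D) features of query images from a backbone phi
    support_feats:  (S, D) features of support images
    support_labels: (S,)   integer class labels of the support set
    Returns (B, num_classes) class probabilities."""
    # Negative squared Euclidean distance as the similarity kernel (one common choice).
    dists = torch.cdist(query_feats, support_feats) ** 2      # (B, S)
    weights = F.softmax(-dists / temperature, dim=-1)         # (B, S), rows sum to 1
    one_hot = F.one_hot(support_labels, num_classes).float()  # (S, C)
    return weights @ one_hot                                  # (B, C)


# Toy usage: 4 queries, 10 support examples, 3 classes, 16-dim features.
if __name__ == "__main__":
    q = torch.randn(4, 16)
    s = torch.randn(10, 16)
    y = torch.randint(0, 3, (10,))
    probs = nw_head(q, s, y, num_classes=3)
    print(probs.shape, probs.sum(dim=-1))  # torch.Size([4, 3]), rows sum to ~1
```

The key property this sketch captures is that the predicted class probabilities are a similarity-weighted average of support-set labels, so the head itself carries no learned classifier weights beyond the feature extractor φ (and possibly a temperature).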
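
The Open Datasets and Dataset Splits rows quote the WILDS Camelyon-17 setup (first 3 hospitals for training, 4th hospital for OOD validation, 5th hospital for OOD test). As a rough sketch only, these official splits can be obtained through the public `wilds` Python package as below; the transform and batch size are placeholders, not the benchmark's prescribed settings.

```python
# Sketch: loading WILDS Camelyon-17 with its official OOD splits via the `wilds` package.
# Transform and batch size are placeholders; follow the WILDS benchmark for real settings.
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader, get_eval_loader
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])

dataset = get_dataset(dataset="camelyon17", download=True)
train_data = dataset.get_subset("train", transform=transform)  # hospitals 1-3
val_data = dataset.get_subset("val", transform=transform)      # OOD validation (4th hospital)
test_data = dataset.get_subset("test", transform=transform)    # OOD test (5th hospital)

train_loader = get_train_loader("standard", train_data, batch_size=32)
val_loader = get_eval_loader("standard", val_data, batch_size=32)
test_loader = get_eval_loader("standard", test_data, batch_size=32)
```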
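
The Experiment Setup row states that 5 models are trained with different random seeds and that model selection is done on an OOD validation set. The toy sketch below illustrates that protocol only; the linear model and random tensors are stand-ins, not the paper's ResNet backbone, NW head, or data.

```python
# Toy sketch of the multi-seed protocol: train with several random seeds, select the best
# checkpoint on a (here: stand-in) OOD validation set, report mean +/- std on the test set.
import numpy as np
import torch
import torch.nn as nn


def run_one_seed(seed, epochs=3):
    """Train a toy model, select the best checkpoint on a validation set,
    and return its test accuracy."""
    torch.manual_seed(seed)
    np.random.seed(seed)
    model = nn.Linear(16, 2)  # toy stand-in for a backbone + prediction head
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x_tr, y_tr = torch.randn(64, 16), torch.randint(0, 2, (64,))
    x_val, y_val = torch.randn(32, 16), torch.randint(0, 2, (32,))
    x_te, y_te = torch.randn(32, 16), torch.randint(0, 2, (32,))
    best_val, best_state = -1.0, None
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x_tr), y_tr).backward()
        opt.step()
        with torch.no_grad():
            val_acc = (model(x_val).argmax(dim=1) == y_val).float().mean().item()
        if val_acc > best_val:  # model selection on the (OOD) validation set
            best_val = val_acc
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    with torch.no_grad():
        return (model(x_te).argmax(dim=1) == y_te).float().mean().item()


accs = [run_one_seed(s) for s in range(5)]  # 5 random seeds, as reported in the paper
print(f"test accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```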