Leveraging unlabeled data to predict out-of-distribution performance

Authors: Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, Hanie Sedghi

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we investigate methods for predicting target-domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold (see the ATC sketch after this table). ATC outperforms previous methods across several model architectures, types of distribution shift (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS, ImageNet, BREEDS, CIFAR, and MNIST). In our experiments, ATC estimates target performance 2-4x more accurately than prior methods.
Researcher Affiliation | Collaboration | Saurabh Garg, Carnegie Mellon University (sgarg2@andrew.cmu.edu); Sivaraman Balakrishnan, Carnegie Mellon University (sbalakri@andrew.cmu.edu); Zachary C. Lipton, Carnegie Mellon University (zlipton@andrew.cmu.edu); Behnam Neyshabur, Google Research, Blueshift team (neyshabur@google.com); Hanie Sedghi, Google Research, Brain team (hsedghi@google.com)
Pseudocode | No | The provided text does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | While we have not released the code yet, the appendix provides all the details necessary to replicate our experiments and results. Moreover, we plan to release the code with a revised version of the manuscript.
Open Datasets | Yes | ATC outperforms previous methods across several model architectures, types of distribution shift (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS, ImageNet, BREEDS, CIFAR, and MNIST).
Dataset Splits | Yes | Throughout this paper, we assume we have access to the following: (i) model f; (ii) previously-unseen (validation) data from D_S; and (iii) unlabeled data from the target distribution D_T.
Hardware Specification | Yes | All experiments were run on NVIDIA Tesla V100 GPUs.
Software Dependencies | No | The paper mentions software such as PyTorch, the Adam optimizer, and DistilBERT-base-uncased models with citations, but it does not specify concrete version numbers for these software components, which reproducibility requires.
Experiment Setup | Yes | CIFAR10 and CIFAR100: We train DenseNet121 and ResNet18 architectures from scratch. We use SGD with momentum 0.9 for 300 epochs, starting at learning rate 0.1 and decaying it by a factor of 0.1 every 100 epochs. We use a weight decay of 5e-4 and a batch size of 200 (see the configuration sketch after this table).
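
The Research Type row above states the ATC rule: learn a confidence threshold on held-out labeled source data, then report the fraction of unlabeled target examples whose confidence clears it. Below is a minimal NumPy sketch of that rule, assuming maximum softmax confidence as the score (the paper also considers a negative-entropy score); the function names and array interfaces are illustrative, not the authors' released implementation.

import numpy as np

def find_atc_threshold(source_probs, source_labels):
    """Learn the ATC threshold on labeled source validation data.

    Picks t so that the fraction of source examples with confidence
    below t equals the model's error rate on that source data.
    """
    confidences = source_probs.max(axis=1)       # max softmax score per example
    predictions = source_probs.argmax(axis=1)
    source_error = float((predictions != source_labels).mean())
    # The source-error quantile of the confidence distribution is that threshold.
    return float(np.quantile(confidences, source_error))

def predict_target_accuracy(target_probs, threshold):
    """Estimated target accuracy: the fraction of unlabeled target
    examples whose confidence exceeds the learned threshold."""
    return float((target_probs.max(axis=1) >= threshold).mean())

With hypothetical arrays val_probs/val_labels from the source validation split and test_probs from the unlabeled target split, the estimate would read predict_target_accuracy(test_probs, find_atc_threshold(val_probs, val_labels)).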
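
The Experiment Setup row translates directly into a short PyTorch configuration. The sketch below is one assumed reading of that row (model construction via torchvision, a StepLR schedule for the 0.1 decay every 100 epochs); the data pipeline and training loop are omitted.

import torch
from torchvision.models import densenet121

# DenseNet121 trained from scratch; ResNet18 is configured analogously.
model = densenet121(num_classes=10)  # 100 classes for CIFAR100

# SGD with momentum 0.9, initial learning rate 0.1, weight decay 5e-4.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

# Multiply the learning rate by 0.1 every 100 epochs (300 epochs total).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

# The (omitted) DataLoader would draw batches of 200 examples.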