Tent: Fully Test-Time Adaptation by Entropy Minimization
Authors: Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results evaluate generalization to corruptions for image classification, to domain shift for digit recognition, and to simulation-to-real shift for semantic segmentation. For context with more data and optimization, we evaluate methods for robust training, domain adaptation, and self-supervised learning given the labeled source data. Tent can achieve less error given only the target data, and it improves on the state-of-the-art for the ImageNet-C benchmark. |
| Researcher Affiliation | Collaboration | Dequan Wang (UC Berkeley), Evan Shelhamer (Adobe Research), Shaoteng Liu (UC Berkeley), Bruno Olshausen (UC Berkeley), Trevor Darrell (UC Berkeley); dqwang@cs.berkeley.edu, shelhamer@google.com |
| Pseudocode | No | The paper describes the algorithm steps in text but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Please see the project page at https://github.com/DequanWang/tent for the code and more. |
| Open Datasets | Yes | For large-scale experiments we choose ImageNet (Russakovsky et al., 2015)... For experiments at an accessible scale we choose CIFAR-10/CIFAR-100 (Krizhevsky, 2009)... For domain adaptation we choose SVHN (Netzer et al., 2011) as source and MNIST (LeCun et al., 1998)/MNIST-M (Ganin & Lempitsky, 2015)/USPS (Hull, 1994) as targets... |
| Dataset Splits | Yes | For large-scale experiments we choose ImageNet (Russakovsky et al., 2015), with 1,000 classes, a training set of 1.2 million, and a validation set of 50,000. |
| Hardware Specification | No | The paper does not specify the exact GPU or CPU models, or other detailed hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper states 'Our implementation is in PyTorch (Paszke et al., 2019) with the pycls library (Radosavovic et al., 2019)' but does not provide explicit version numbers for PyTorch or pycls. |
| Experiment Setup | Yes | On ImageNet, we set BS = 64 and LR = 0.00025, and on other datasets we set BS = 128 and LR = 0.001. |
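
Since the paper provides no pseudocode block (see the Pseudocode row above), the following is a minimal PyTorch sketch of the method as described in the paper's text: only the batch-norm affine parameters (scale and shift) are updated, normalization statistics come from the test batch, and the loss is the entropy of the model's own predictions. The function names and the restriction to `BatchNorm2d` layers are illustrative assumptions, not taken from the official repository.

```python
import torch
import torch.nn as nn
import torch.optim as optim


def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the softmax prediction, the test-time objective minimized by Tent."""
    return -(logits.softmax(dim=1) * logits.log_softmax(dim=1)).sum(dim=1)


def configure_model(model: nn.Module) -> list:
    """Freeze all weights except batch-norm scale/shift; normalize by test-batch statistics."""
    model.train()                      # BN layers use the current batch's statistics
    model.requires_grad_(False)        # freeze every parameter ...
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)     # ... except the BN affine parameters (gamma, beta)
            m.track_running_stats = False
            m.running_mean = None      # discard source statistics
            m.running_var = None
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params


def tent_adapt_step(model: nn.Module, optimizer: optim.Optimizer,
                    x: torch.Tensor) -> torch.Tensor:
    """One fully test-time adaptation step: forward pass, entropy loss, gradient update."""
    logits = model(x)
    loss = softmax_entropy(logits).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return logits
```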
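
Using the hyperparameters reported in the Experiment Setup row, a hypothetical adaptation loop over target data could build on the sketch above as follows; the ResNet-50 backbone, the Adam optimizer, and the random stand-in dataset are assumptions for illustration, since the report only records the batch size and learning rate.

```python
import torchvision
from torch.utils.data import DataLoader, TensorDataset

# Stand-in source model and target data; only BS and LR below come from the report
# (ImageNet: BS = 64, LR = 0.00025; other datasets: BS = 128, LR = 0.001).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
target_data = TensorDataset(torch.randn(256, 3, 224, 224))      # placeholder target images
target_loader = DataLoader(target_data, batch_size=64, shuffle=False)

params = configure_model(model)
optimizer = optim.Adam(params, lr=0.00025)                      # optimizer choice is assumed

for (x,) in target_loader:                                      # labels are never used
    logits = tent_adapt_step(model, optimizer, x)
    preds = logits.argmax(dim=1)
```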