Energy-Based Test Sample Adaptation for Domain Generalization
Authors: Zehao Xiao, Xiantong Zhen, Shengcai Liao, Cees G. M. Snoek
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal. |
| Researcher Affiliation | Collaboration | AIM Lab, University of Amsterdam; Inception Institute of Artificial Intelligence |
| Pseudocode | Yes | We provide the detailed training and test algorithm of our energy-based sample adaptation in Algorithm 1. (A minimal sketch of the test-time adaptation loop appears below the table.) |
| Open Source Code | Yes | Code available: https://github.com/zzzx1224/EBTSA-ICLR2023. |
| Open Datasets | Yes | We conduct our experiments on five widely used datasets for domain generalization, PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), and Rotated MNIST and Fashion-MNIST. ... PHEME (Zubiaga et al., 2016) |
| Dataset Splits | Yes | We use the same training and validation split as (Li et al., 2017) and follow their leave-one-out protocol. |
| Hardware Specification | Yes | We train all models on an NVIDIA Tesla V100 GPU for 10,000 iterations. |
| Software Dependencies | No | The paper names software components such as Adam optimization and DistilBERT, but it provides no version numbers for any library or framework, which reproducibility requires. |
| Experiment Setup | Yes | We use Adam optimization and train for 10,000 iterations with a batch size of 128. We set the learning rate to 0.00005 for ResNet-18, 0.00001 for ResNet-50, and 0.0001 for the energy-based model and classification model. We use 20 steps of Langevin dynamics sampling to adapt the target samples to source distributions, with a step size of 50. (An optimizer-configuration sketch follows this table.) |
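
The Algorithm 1 referenced in the Pseudocode row lives in the paper itself. As a rough illustration of the test-time adaptation loop it describes, here is a minimal PyTorch sketch of Langevin dynamics driven by an energy model. The function name `langevin_adapt`, the `energy_model` interface, the feature width, and the `noise_scale` value are assumptions made for illustration; only the 20 sampling steps and the step size of 50 come from the Experiment Setup quote above.

```python
import torch
import torch.nn as nn

def langevin_adapt(energy_model: nn.Module, z: torch.Tensor,
                   n_steps: int = 20, step_size: float = 50.0,
                   noise_scale: float = 0.005) -> torch.Tensor:
    """Move a target-domain feature z toward low-energy (source-like)
    regions of energy_model via Langevin dynamics. noise_scale is an
    assumed value, not taken from the paper."""
    z = z.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_model(z).sum()
        grad, = torch.autograd.grad(energy, z)
        # Langevin update: gradient step toward lower energy plus Gaussian noise.
        z = (z - 0.5 * step_size * grad
             + noise_scale * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

# Hypothetical usage: adapt a batch of 512-d target features.
energy_model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
z_adapted = langevin_adapt(energy_model, torch.randn(8, 512))
```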
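Similarly, the Experiment Setup row can be read as the following optimizer configuration. This is a hedged sketch, not the authors' code: the module names, feature width, and class count (7, as in PACS) are placeholders, while the learning rates, batch size, and iteration count are taken from the quote above.

```python
import torch
import torchvision

# Placeholder modules standing in for the paper's networks.
backbone = torchvision.models.resnet18(weights=None)
energy_model = torch.nn.Sequential(
    torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1))
classifier = torch.nn.Linear(512, 7)

# Per-module learning rates as quoted in the Experiment Setup row.
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 5e-5},       # 1e-5 when using ResNet-50
    {"params": energy_model.parameters(), "lr": 1e-4},
    {"params": classifier.parameters(), "lr": 1e-4},
])
# Training then runs for 10,000 iterations with a batch size of 128.
```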