Learning to Generalize across Domains on Single Test Samples
Authors: Zehao Xiao, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct our experiments on four widely used benchmarks for domain generalization: PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), rotated MNIST, and Fashion-MNIST (Piratla et al., 2020). |
| Researcher Affiliation | Collaboration | Zehao Xiao¹, Xiantong Zhen¹,², Ling Shao³, Cees G. M. Snoek¹ (¹AIM Lab, University of Amsterdam; ²Inception Institute of Artificial Intelligence; ³National Center for Artificial Intelligence, Saudi Data and Artificial Intelligence Authority) |
| Pseudocode | Yes | We describe the detailed training and test algorithm of our single sample generalization method in Algorithm 1. |
| Open Source Code | Yes | Code available: https://github.com/zzzx1224/SingleSampleGeneralization-ICLR2022 |
| Open Datasets | Yes | We conduct our experiments on four widely used benchmarks for domain generalization: PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), rotated MNIST, and Fashion-MNIST (Piratla et al., 2020). |
| Dataset Splits | Yes | We use the same training and validation split as in (Li et al., 2017) and follow the leave-one-out protocol from (Li et al., 2017; 2019; Carlucci et al., 2019). (A sketch of this protocol follows the table.) |
| Hardware Specification | Yes | We train all models on an NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions software components such as ResNet-18/50 and Adam optimization but does not provide specific version numbers for these or for other software dependencies such as Python or PyTorch. |
| Experiment Setup | Yes | During training, we use Adam optimization and train the model for 10,000 iterations. The learning rate of the backbone is set to 0.00005 for ResNet-18 and 0.00001 for ResNet-50, while the learning rate of the network for generating the classifier is set to 0.0001 consistently. The batch size is 128. (An optimizer sketch follows the table.) |
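To make the evaluation protocol quoted in the Dataset Splits row concrete, here is a minimal sketch of the leave-one-out splits: for each of the four PACS domains, train on the remaining three and test on the held-out one. The domain names are the standard PACS domains; the helper function is illustrative, not code from the authors' repository.

```python
# Minimal sketch of the leave-one-out protocol: train on three PACS domains,
# test on the held-out fourth. Domain names follow the PACS benchmark.
PACS_DOMAINS = ["photo", "art_painting", "cartoon", "sketch"]

def leave_one_out_splits(domains):
    """Yield (train_domains, held_out_domain) pairs, one per domain."""
    for held_out in domains:
        yield [d for d in domains if d != held_out], held_out

for train_domains, test_domain in leave_one_out_splits(PACS_DOMAINS):
    print(f"train: {train_domains} -> test: {test_domain}")
```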
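The Experiment Setup row maps directly onto per-parameter-group learning rates. Below is a hedged PyTorch sketch under the assumption of a ResNet-18 backbone and a placeholder classifier-generating head; neither module mirrors the authors' actual architecture, and only the quoted hyperparameters (optimizer, learning rates, batch size, iteration count) come from the paper.

```python
import torch
from torchvision.models import resnet18

# Hedged sketch of the quoted setup: Adam, 10,000 iterations, batch size 128,
# lr 5e-5 for the ResNet-18 backbone (1e-5 for ResNet-50) and lr 1e-4 for the
# network that generates the classifier. Both modules are placeholders.
backbone = resnet18()  # randomly initialized ResNet-18
classifier_generator = torch.nn.Linear(512, 7)  # placeholder; 7 = PACS classes

optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 5e-5},             # 1e-5 with ResNet-50
    {"params": classifier_generator.parameters(), "lr": 1e-4},
])

BATCH_SIZE = 128        # as reported in the paper
NUM_ITERATIONS = 10_000  # as reported in the paper
```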