POEM: Polarization of Embeddings for Domain-Invariant Representations
Authors: Sang-Yeong Jo, Sung Whan Yoon
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive simulation results on popular DG benchmarks with the PACS, VLCS, Office-Home, Terra Incognita, and DomainNet datasets show that POEM indeed facilitates the category-classifying embedding to be more domain-invariant. |
| Researcher Affiliation | Academia | Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology (UNIST), Republic of Korea jsy7058@unist.ac.kr, shyoon8@unist.ac.kr |
| Pseudocode | Yes | Algorithm 1 presents the pseudocode of POEM. |
| Open Source Code | Yes | Our experiments are run on the DomainBed framework of (Gulrajani and Lopez-Paz 2021), which is publicly released under the MIT license to evaluate the existing DG methods. Code is available at github.com/JoSangYoung/Official-POEM |
| Open Datasets | Yes | We have conducted extensive experiments to evaluate POEM on the five popular domain generalization (DG) benchmarks based on PACS (Li et al. 2017), VLCS (Fang, Xu, and Rockmore 2013), Office-Home (Venkateswara et al. 2017), Terra Incognita (Beery, Van Horn, and Perona 2018), and DomainNet (Peng et al. 2019). |
| Dataset Splits | Yes | We follow the training and evaluation protocols of DomainBed of (Gulrajani and Lopez-Paz 2021). Also, we follow the data splitting introduced by the work of SWAD (Cha et al. 2021). |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact CPU/GPU models, processor types, or memory amounts used for running its experiments. It only vaguely mentions 'lack of memory in our simulation'. |
| Software Dependencies | No | The paper mentions using the 'DomainBed framework' and 'ResNet50 architecture' but does not specify software versions for programming languages, libraries, or other dependencies (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | We set the number of training iterations of POEM to be the same as the experiments done in (Cha et al. 2021), i.e., PACS: 5,000, VLCS: 5,000, Office-Home: 5,000, Terra Incognita: 5,000, DomainNet: 15,000 iterations... A mini-batch contains 32 images from each source domain in benchmark datasets. Due to the lack of memory in our simulation, a mini-batch for the DomainNet case contains 20 images for each source domain. |