Unknown Domain Inconsistency Minimization for Domain Generalization

Authors: Seungjae Shin, HeeSun Bae, Byeonghu Na, Yoon-Yeong Kim, Il-chul Moon

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In an empirical aspect, UDIM consistently outperforms SAM variants across multiple DG benchmark datasets. Notably, UDIM shows statistically significant improvements in scenarios with more restrictive domain information, underscoring UDIM's generalization capability in unseen domains.
Researcher Affiliation | Collaboration | Seungjae Shin¹, HeeSun Bae¹, Byeonghu Na¹, Yoon-Yeong Kim² & Il-Chul Moon¹,³. ¹Department of Industrial and Systems Engineering, KAIST; ²Department of Statistics, University of Seoul; ³summary.ai
Pseudocode | Yes | Algorithm of UDIM is in Appendix C. ... Algorithm 1: Training algorithm of UDIM w/ SAM
Open Source Code | Yes | Our code is available at https://github.com/SJShin-AI/UDIM.
Open Datasets | Yes | First, we conducted evaluation on CIFAR-10-C (Hendrycks & Dietterich, 2019), a synthetic dataset that emulates various domains by applying several synthetic corruptions to CIFAR-10 (Krizhevsky et al., 2009). Furthermore, we extend our evaluation to real-world datasets with multiple domains, namely PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019).
Dataset Splits | Yes | We report the test performance of the checkpoint whose accuracy on the source validation dataset is best.
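The model-selection rule quoted above (report the test accuracy at the checkpoint with the best source-validation accuracy) can be sketched in a few lines. The checkpoint log below is a hypothetical illustration, not data from the paper:

```python
# Model selection by source-validation accuracy: report the test accuracy
# of the checkpoint whose validation accuracy on the source domains is best.
# The numbers below are hypothetical placeholders, not results from the paper.
checkpoints = [
    {"iter": 1000, "val_acc": 0.81, "test_acc": 0.70},
    {"iter": 2000, "val_acc": 0.86, "test_acc": 0.74},
    {"iter": 3000, "val_acc": 0.84, "test_acc": 0.76},
]

# Take the checkpoint maximizing source-validation accuracy
# (ties resolved by first occurrence).
best = max(checkpoints, key=lambda c: c["val_acc"])
reported_test_acc = best["test_acc"]  # -> 0.74 (checkpoint at iter 2000)
```

Note that this differs from oracle selection: the checkpoint with the best unseen-domain test accuracy (0.76 here) is not the one reported.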
Hardware Specification | No | The paper mentions using ResNet-18 and ResNet-50 models and the Adam optimizer but does not specify the hardware (e.g., GPU models, CPU types) used for the experiments.
Software Dependencies | No | We utilize BackPACK (Dangel et al., 2020), which provides the faster computation of per-sample gradients. ... and use Adam (Kingma & Ba, 2014) optimizer basically. While BackPACK and Adam are mentioned, specific version numbers for these or other key software components (like Python, PyTorch) are not provided.
Experiment Setup | Yes | The learning rate is set as 3 × 10⁻⁵ following Wang et al. (2023). ... we use batch size 32 for PACS, Office-Home, and DomainNet and 64 for CIFAR-10-C. ... we trained for a total of 5,000 iterations. For DomainNet, we trained for 15,000 iterations. For CIFAR-10, since it usually trains for 100 epochs, we translate this to iterations, giving a total of 781 × 100 = 78,100 iterations.
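The epoch-to-iteration conversion quoted above (781 × 100 = 78,100) follows from the training-set size and batch size. A minimal sketch, assuming the standard 50,000-image CIFAR-10 training split and drop-last batching (the paper states only the final product):

```python
# Convert an epoch budget to an iteration budget, as done for CIFAR-10:
# 50,000 training images with batch size 64 give 781 full batches per epoch
# (assuming the final partial batch is dropped), so 100 epochs = 78,100 iters.
train_size = 50_000   # standard CIFAR-10 training split (assumed)
batch_size = 64       # batch size reported for CIFAR-10-C
epochs = 100          # typical CIFAR-10 epoch budget per the paper

iters_per_epoch = train_size // batch_size  # 781 (drop-last batching)
total_iters = iters_per_epoch * epochs      # 78,100
```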