Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach

Authors: Minyoung Kim, Da Li, Timothy Hospedales

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On the DomainBed benchmark, including the large-scale DomainNet dataset, the proposed approach yields significantly improved domain generalisation performance over the state-of-the-art. Our empirical results show clear improvement over previous state-of-the-art methods on the rigorous DomainBed benchmark.
Researcher Affiliation | Collaboration | Minyoung Kim (Samsung AI Center Cambridge, UK), Da Li (Samsung AI Center Cambridge, UK) & Timothy M. Hospedales (Samsung AI Center Cambridge, UK; University of Edinburgh, UK)
Pseudocode | Yes | The pseudocode of our AGFA algorithm with the SWAD strategy is summarised in Alg. 1 (in the Appendix). A rough sketch of the Fourier amplitude-swap step at the heart of AGFA follows the table.
Open Source Code | No | The paper does not contain an explicit statement about the public release of the source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We test our approach on the DomainBed benchmark (Gulrajani & Lopez-Paz, 2021), including: PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), Terra Incognita (Beery et al., 2018), and DomainNet (Peng et al., 2019).
Dataset Splits | Yes | For each dataset, we adopt the standard leave-one-domain-out source/target domain splits (enumerated in the sketch below). The hyperparameters introduced in our model (e.g., SMCD trade-off η) and the general ones (e.g., learning rate, SWAD regime hyperparameters, maximum number of iterations) are chosen by grid search on the validation set according to the DomainBed protocol (Gulrajani & Lopez-Paz, 2021).
Hardware Specification | Yes | Our model is trained by the Adam optimiser (Kingma & Ba, 2015) on machines with single Tesla V100 GPUs.
Software Dependencies | No | The paper mentions software components like the 'Adam optimiser' and 'ResNet-50' but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | The hyperparameters introduced in our model (e.g., SMCD trade-off η) and the general ones (e.g., learning rate, SWAD regime hyperparameters, maximum number of iterations) are chosen by grid search on the validation set according to the DomainBed protocol (Gulrajani & Lopez-Paz, 2021). For instance, η = 0.1 for all datasets. The implementation details, including the chosen hyperparameters, can be found in Appendix A.1. We adopt the ResNet-50 (He et al., 2016) architecture... The input noise dimension for the generator is chosen as 100... The number of MC samples from Qλ(W) in the ELBO optimisation is chosen as 50. The optimisation hyperparameters are chosen by the same strategy as Cha et al. (2021): we employ the Adam optimiser (Kingma & Ba, 2015) with learning rate 5e-5 and no dropout or weight decay. The batch size was 32... A configuration sketch follows the table.
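
The Pseudocode row above refers to Alg. 1 in the paper's appendix, which is not reproduced here. As a rough illustration of the Fourier amplitude manipulation named in the title, the sketch below recombines a generated amplitude spectrum with the phase of a source image; the function name, array layout, and NumPy implementation are our own assumptions, not the authors' code.

```python
import numpy as np

def swap_amplitude(image, generated_amplitude):
    """Recombine a generated Fourier amplitude with a source image's phase.

    Generic amplitude-swap sketch (not the authors' code): the phase
    spectrum, which tends to carry semantic content, is kept, while the
    amplitude, which tends to carry style/domain cues, is replaced --
    in AGFA the replacement amplitude is produced adversarially by a
    generator.
    """
    spectrum = np.fft.fft2(image, axes=(0, 1))   # per-channel 2-D FFT on (H, W, C)
    phase = np.angle(spectrum)                   # keep the source phase
    shifted = generated_amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(shifted, axes=(0, 1)))
```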
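
The leave-one-domain-out protocol cited in the Dataset Splits row is simple to state in code. The sketch below enumerates the source/target splits for the four PACS domains; the helper name is ours, and the same loop applies to the other DomainBed datasets.

```python
# The four PACS domains; VLCS, OfficeHome, etc. follow the same pattern.
PACS_DOMAINS = ["art_painting", "cartoon", "photo", "sketch"]

def leave_one_domain_out(domains):
    """Yield (source_domains, target_domain) pairs: each domain is held
    out once as the unseen target, and the rest are training sources."""
    for held_out in domains:
        yield [d for d in domains if d != held_out], held_out

for sources, target in leave_one_domain_out(PACS_DOMAINS):
    print(f"train on {sources}, evaluate on {target}")
```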
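
Finally, the Experiment Setup row quotes enough hyperparameters to pin down the basic optimisation configuration. The PyTorch sketch below wires them together, assuming a standard torchvision ResNet-50 backbone; the variable names are ours, and the paper's amplitude generator and SMCD loss are not reproduced here.

```python
import torch
import torchvision

# Reported settings: Adam, learning rate 5e-5, no weight decay (and no
# dropout), batch size 32, ResNet-50 backbone (He et al., 2016).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=0.0)

BATCH_SIZE = 32       # quoted from the paper
NOISE_DIM = 100       # input noise dimension of the amplitude generator
NUM_MC_SAMPLES = 50   # MC samples from Q_lambda(W) in the ELBO optimisation
```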