Towards Accurate and Robust Domain Adaptation under Noisy Environments

Authors: Zhongyi Han, Xian-Jin Gui, Chaoran Cui, Yilong Yin

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A series of empirical studies show that our algorithm remarkably outperforms the state of the art, with over 10% accuracy improvements in some domain adaptation tasks under noisy environments.
Researcher Affiliation | Academia | Zhongyi Han (1), Xian-Jin Gui (2), Chaoran Cui (3), and Yilong Yin (1). (1) School of Software, Shandong University, Jinan 250101, China; (2) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (3) School of Computer Science and Technology, Shandong University of Finance and Economics.
Pseudocode | Yes | Algorithm 1: Robust Domain Adaptation Algorithm. (A hedged sketch of such a training loop appears after this table.)
Open Source Code | Yes | The code is at https://github.com/zhyhan/RDA.
Open Datasets | Yes | We evaluate our algorithm on three datasets against state-of-the-art methods. [...] Office-Home is a more challenging domain adaptation dataset consisting of 15,599 images with 65 unbalanced classes. [...] Following the protocol in the pioneering work [Shu et al., 2019], we create corrupted counterparts of the above two clean datasets as follows. [...] Bing-Caltech is a real noisy dataset created from the Bing and Caltech-256 datasets with 256 classes. (A sketch of such a corruption protocol appears after this table.)
Dataset Splits | No | The paper describes training on a source domain and testing on a target domain. It notes that setting a fixed threshold by cross-validation is impractical for domain adaptation, and no explicit validation split is described for hyperparameter tuning.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | No | The paper mentions implementation in PyTorch but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | We use ResNet-50 as the representation function with parameters pre-trained on ImageNet. The main classifier and the adversarial classifier are both 2-layer neural networks. We set the early-training epoch T to 30. α is set to 3 and β is set to 0.1 following [Zhang et al., 2019]. We use mini-batch SGD with Nesterov momentum 0.9. The initial learning rate of the classifiers f and f′ is 0.001, ten times that of the representation function ψ. The settings of r_k and τ depend on the noise rates. (An optimizer-setup sketch appears after this table.)
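
The paper's Algorithm 1 is not reproduced in the quoted response. As a hedged illustration only, the sketch below shows how a robust domain adaptation loop of this kind could be organized: small-loss sample selection on the noisy source domain combined with an adversarial discrepancy term. All identifiers (train_epoch, proxy_discrepancy, keep_ratio) are hypothetical, and the paper's exact losses and schedule may differ.

import torch
import torch.nn.functional as F

def proxy_discrepancy(f, f_adv, feat_s, feat_t):
    # Simplified stand-in for an adversarial discrepancy between the main
    # classifier f and the adversarial classifier f_adv; the paper's exact
    # formulation may differ.
    ps = f(feat_s).argmax(dim=1)
    pt = f(feat_t).argmax(dim=1)
    return (F.cross_entropy(f_adv(feat_s), ps)
            - F.cross_entropy(f_adv(feat_t), pt))

def train_epoch(psi, f, f_adv, loader_src, loader_tgt, optimizer, keep_ratio):
    for (xs, ys), (xt, _) in zip(loader_src, loader_tgt):
        feat_s, feat_t = psi(xs), psi(xt)
        # Small-loss criterion: keep the keep_ratio fraction of source
        # samples with the lowest loss, treating the rest as likely noisy.
        losses = F.cross_entropy(f(feat_s), ys, reduction="none")
        n_keep = max(1, int(keep_ratio * len(ys)))
        keep = torch.argsort(losses)[:n_keep]
        loss = losses[keep].mean() + proxy_discrepancy(f, f_adv, feat_s[keep], feat_t)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()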
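
The corruption protocol is only summarized in the quoted text. A minimal sketch of what such a protocol could look like, assuming uniform label flips plus salt-and-pepper image noise in the spirit of [Shu et al., 2019], is given below; the noise types, rates, and helper names are assumptions, not the authors' code.

import random
import numpy as np

def corrupt_label(y, num_classes, noise_rate):
    # With probability noise_rate, replace the label with a uniformly
    # drawn incorrect class (uniform label noise).
    if random.random() < noise_rate:
        wrong = [c for c in range(num_classes) if c != y]
        return random.choice(wrong)
    return y

def corrupt_image(img, noise_rate, amount=0.05):
    # With probability noise_rate, add salt-and-pepper noise to the image
    # (img is an HxWxC uint8 array); `amount` is an assumed corruption level.
    if random.random() < noise_rate:
        out = img.copy()
        mask = np.random.rand(*img.shape[:2])
        out[mask < amount / 2] = 0      # pepper
        out[mask > 1 - amount / 2] = 255  # salt
        return out
    return img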
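
The quoted setup translates naturally into an optimizer configuration. A minimal sketch, assuming a torchvision ResNet-50 backbone, 65 output classes (Office-Home), and a 1024-unit hidden layer in each 2-layer head (the hidden width is not stated in the quote), could look like this:

import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)  # representation function ψ, ImageNet weights
backbone.fc = nn.Identity()                  # expose the 2048-d features

def make_head(num_classes=65):
    # 2-layer classifier head; the 1024-unit hidden width is an assumption.
    return nn.Sequential(nn.Linear(2048, 1024), nn.ReLU(), nn.Linear(1024, num_classes))

f_main, f_adv = make_head(), make_head()     # main and adversarial classifiers

# Classifier learning rate 0.001, ten times that of the backbone (0.0001),
# with mini-batch SGD and Nesterov momentum 0.9 as reported.
optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 1e-4},
     {"params": f_main.parameters(), "lr": 1e-3},
     {"params": f_adv.parameters(), "lr": 1e-3}],
    momentum=0.9, nesterov=True)

The two parameter groups encode the reported tenfold learning-rate gap between the classifier heads and the representation function ψ.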