Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation

Authors: Cheng Feng, Chaoliang Zhong, Jie Wang, Ying Zhang, Jun Sun, Yasuto Yokota

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we conduct experiments on a wide range of real-world datasets. We modify three offline UDA algorithms, i.e., DANN [Ganin et al., 2016], CDAN [Long et al., 2017], and MCC [Jin et al., 2020], and evaluate their performance on OUDA tasks. |
| Researcher Affiliation | Industry | ¹Fujitsu R&D Center Co., Ltd.; ²Fujitsu Ltd. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/FujitsuResearch/sourcefeature-distillation. |
| Open Datasets | Yes | As the most prevalent benchmark datasets in DA tasks, we conduct experiments on Office-Home [Venkateswara et al., 2017], Office-31 [Saenko et al., 2010] and ImageCLEF-DA [Long et al., 2017]. |
| Dataset Splits | No | The whole target domain is divided into a sequence of sub-domains with a batch size of 36 by random selection. We follow the settings in the literature for online training [Kirkpatrick et al., 2017; McMahan et al., 2013], where the algorithms have no access to the target data that arrived in previous steps. A sketch of this splitting protocol is given after the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments. |
| Software Dependencies | No | We start OUDA tasks with a source-only (SO) model which trains from scratch with ResNet-50 [He et al., 2016] implemented by PyTorch. The paper mentions 'PyTorch' but does not specify a version number or other software dependencies with versions. |
| Experiment Setup | Yes | The number of training epochs for each step is set to 20. We adopt mini-batch SGD with a momentum of 0.9 and the learning-rate annealing strategy of [Ganin et al., 2016]. For all tasks, we use the same hyper-parameters, with α = 1. A sketch of this training setup is given after the table. |
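
The "Dataset Splits" row describes the online protocol only in prose. Below is a minimal PyTorch sketch of one way the target domain could be divided into a sequence of randomly selected sub-domains of 36 samples, with no access to target data from earlier steps. The function names, the `adapter_step` callback, and the loop structure are illustrative assumptions, not taken from the paper or the released code.

```python
import torch
from torch.utils.data import DataLoader, Subset


def make_online_target_stream(target_dataset, batch_size=36, seed=0):
    """Split the whole target domain into a sequence of disjoint sub-domains
    of `batch_size` samples, chosen by random selection (assumed protocol)."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(target_dataset), generator=g).tolist()
    chunks = [perm[i:i + batch_size] for i in range(0, len(perm), batch_size)]
    return [Subset(target_dataset, idx) for idx in chunks]


def run_ouda(adapter_step, source_loader, target_dataset, epochs_per_step=20):
    """Online loop: at step t the adapter only sees sub-domain t; target data
    from previous steps is never revisited (per the paper's online setting)."""
    for step, sub_domain in enumerate(make_online_target_stream(target_dataset)):
        target_loader = DataLoader(sub_domain, batch_size=36, shuffle=True)
        for _ in range(epochs_per_step):
            # `adapter_step` stands in for one pass of the modified DANN/CDAN/MCC variant.
            adapter_step(source_loader, target_loader)
```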
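Similarly, the "Software Dependencies" and "Experiment Setup" rows imply a ResNet-50 backbone trained with mini-batch SGD (momentum 0.9) under the learning-rate annealing of [Ganin et al., 2016]. The sketch below shows one way to set this up in PyTorch; the initial learning rate and the annealing constants (10 and 0.75 in the DANN formula, distinct from the paper's α = 1 hyper-parameter), as well as the number of classes and iteration count, are assumptions rather than values reported in this paper.

```python
import torch
from torchvision.models import resnet50


def dann_annealed_lr(progress, lr0=0.01, alpha=10.0, beta=0.75):
    """lr_p = lr0 / (1 + alpha * p) ** beta with p in [0, 1], the annealing
    schedule used in Ganin et al. (2016); constants are assumed here."""
    return lr0 / (1.0 + alpha * progress) ** beta


num_classes = 65                     # e.g. Office-Home; dataset dependent (assumption)
model = resnet50(weights=None)       # per the quoted "trains from scratch"; many UDA
                                     # pipelines instead start from ImageNet weights
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

total_iters = 1000                   # illustrative; set by epochs x iterations per epoch
for it in range(total_iters):
    lr = dann_annealed_lr(it / total_iters)
    for group in optimizer.param_groups:
        group["lr"] = lr
    # ... forward pass, classification / adaptation losses, backward, optimizer.step() ...
```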