Bi-Directional Generation for Unsupervised Domain Adaptation

Authors: Guanglei Yang, Haifeng Xia, Mingli Ding, Zhengming Ding (pp. 6615-6622)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type: Experimental. "Extensive experiments verify that our proposed model outperforms the state-of-the-art on standard cross-domain visual benchmarks. We evaluate the BDG method by the standard benchmarks including Office-31 and Office-Home, compared with state-of-the-art domain adaptation methods."
Researcher Affiliation: Academia. School of Instrument Science and Engineering, Harbin Institute of Technology; Department of Electrical & Computer Engineering, Indiana University-Purdue University Indianapolis; Department of Computer, Information and Technology, Indiana University-Purdue University Indianapolis. {yangguanglei, dingml}@hit.edu.cn, {haifxia, zd2}@iu.edu
Pseudocode: No. The paper describes the algorithm through prose and mathematical equations in "The Proposed Algorithm" section (e.g., "Step A: First, we train classifier C0..."), but it does not contain a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code: No. The paper does not provide an explicit statement about releasing source code, nor a link to a code repository for the described methodology.
Open Datasets: Yes. "We evaluate BDG method by the standard benchmarks including Office-31 and Office-Home, compared with state of the art domain adaption methods." From "Datasets and Experimental Setup": "Office-31 (Saenko et al. 2010), a standard benchmark for visual domain adaptation... Office-Home (Venkateswara et al. 2017) is a more challenging dataset for domain adaptation evaluation."
Dataset Splits: No. "We follow the standard evaluation protocols for unsupervised domain adaptation (Ganin et al. 2016; Long et al. 2015)." This statement refers to general protocols but does not give specific percentages or counts for the train/validation/test splits used in the experiments.
Hardware Specification: No. The paper does not provide specific hardware details (e.g., GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies: No. The paper mentions software components such as ResNet-50, SGD, and Adam, but it does not specify exact version numbers for these or other dependencies, such as deep learning frameworks or operating systems.
Experiment Setup: Yes. From "Implementation Details": the hyper-parameters λ, γ in Equation (7) are set to 1 throughout all experiments. A ResNet-50 (He et al. 2016) model pre-trained on the ImageNet dataset (Russakovsky et al. 2015) serves as the backbone, with its last FC layer removed. All convolutional and pooling layers are fine-tuned, and back-propagation is applied to train the classifiers and generators. The classifiers Cs, Ct are optimized with mini-batch stochastic gradient descent (SGD) with momentum 0.9, while the generators Gs, Gt are trained with adaptive moment estimation (Adam), as in (Salimans et al. 2016). The learning rate is set to 5.0 × 10^-4 in all experiments, and accuracy is reported after 20,000 iterations.