Bi-Classifier Determinacy Maximization for Unsupervised Domain Adaptation

Authors: Shuang Li, Fangrui Lv, Binhui Xie, Chi Harold Liu, Jian Liang, Chen Qin

AAAI 2021, pp. 8455-8464

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that BCDM compares favorably against the existing state-of-the-art domain adaptation methods." "We evaluate BCDM against many state-of-the-art algorithms on four domain adaptation datasets and two semantic segmentation datasets."
Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, Beijing Institute of Technology, China; (2) Alibaba Group, China; (3) Institute for Digital Communications, School of Engineering, University of Edinburgh, United Kingdom
Pseudocode | Yes | "Algorithm 1 The Algorithm of BCDM for UDA." A hedged sketch of such a bi-classifier training loop is given after this table.
Open Source Code | No | The paper links to an arXiv preprint of itself ('https://arxiv.org/abs/2012.06995') for supplementary material, but it does not state that source code for the described method is released, nor does it link to a code repository.
Open Datasets | Yes | "We evaluate BCDM against many state-of-the-art algorithms on four domain adaptation datasets and two semantic segmentation datasets." DomainNet (Peng et al. 2019) is the largest and hardest dataset to date for visual domain adaptation... VisDA-2017 (Peng et al. 2017)... Office-31 (Saenko et al. 2010)... ImageCLEF... Cityscapes (Cordts et al. 2016)... GTA5 (Richter et al. 2016)...
Dataset Splits | Yes | "Cityscapes (Cordts et al. 2016) is a real-world dataset with 5,000 urban scenes which are divided into training, validation and test sets"; "Deep Embedded Validation (You et al. 2019) is conducted to select hyper-parameters, and then we fix α = 0.01 in all experiments."
Hardware Specification | No | The paper does not specify the hardware used, such as GPU or CPU models. It only mentions pre-trained ResNet backbones, which imply GPU training but give no concrete specifications.
Software Dependencies | No | The paper names the optimizer (SGD) and network architectures (ResNet, DeepLabv2) but provides no version numbers for the software libraries used in the implementation.
Experiment Setup | Yes | "For image classification, [...] learning rate 3×10⁻⁴, momentum 0.9 and weight decay 5×10⁻⁴. [...] For semantic segmentation, [...] initial learning rate is set as 2.5×10⁻⁴ with momentum 0.9 and weight decay 10⁻⁴. [...] the network is first trained with L_cls for 20k iterations and then fine-tuned using Algorithm 1 for 40k iterations. [...] we fix α = 0.01 in all experiments." A sketch mapping these settings onto optimizer configurations follows the table.
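
For readers without access to Algorithm 1, the sketch below shows one way a bi-classifier adversarial adaptation step could look in PyTorch. This is a minimal sketch under stated assumptions, not the paper's verbatim procedure: the names (`BiClassifierNet`, `disparity`, `train_step`), the MCD-style three-step alternation, and the use of the off-diagonal mass of the paired-prediction matrix p qᵀ (i.e., 1 − ⟨p, q⟩) as the determinacy-disparity term are all illustrative.

```python
# Minimal sketch of a bi-classifier adversarial adaptation step.
# ASSUMPTIONS: all names, the three-step alternation, and the exact
# disparity term below are illustrative; they are not quoted from the
# paper's Algorithm 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiClassifierNet(nn.Module):
    """Feature extractor G with two task classifiers C1 and C2."""
    def __init__(self, feat_dim=256, num_classes=31):
        super().__init__()
        self.G = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.C1 = nn.Linear(feat_dim, num_classes)
        self.C2 = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.G(x)
        return self.C1(f), self.C2(f)

def disparity(logits1, logits2):
    # Off-diagonal mass of the paired-prediction matrix p q^T, equal to
    # 1 - <p, q>. It vanishes only when both classifiers agree on a
    # confident (near one-hot) prediction, so minimizing it encourages
    # determinate, consistent target predictions.
    p = F.softmax(logits1, dim=1)
    q = F.softmax(logits2, dim=1)
    return (1.0 - (p * q).sum(dim=1)).mean()

def train_step(model, opt_g, opt_c, xs, ys, xt):
    ce = nn.CrossEntropyLoss()

    # Step A: supervised training of G, C1, C2 on labeled source data.
    opt_g.zero_grad(); opt_c.zero_grad()
    s1, s2 = model(xs)
    (ce(s1, ys) + ce(s2, ys)).backward()
    opt_g.step(); opt_c.step()

    # Step B: classifiers maximize target disparity while staying
    # accurate on the source; the feature extractor is not updated.
    opt_c.zero_grad()
    s1, s2 = model(xs)
    t1, t2 = model(xt)
    (ce(s1, ys) + ce(s2, ys) - disparity(t1, t2)).backward()
    opt_c.step()

    # Step C: feature extractor minimizes target disparity; the
    # classifiers are not updated.
    opt_g.zero_grad()
    t1, t2 = model(xt)
    disparity(t1, t2).backward()
    opt_g.step()
```

A typical pairing would use two SGD optimizers, e.g. `opt_g = torch.optim.SGD(model.G.parameters(), ...)` for the feature extractor and `opt_c = torch.optim.SGD(list(model.C1.parameters()) + list(model.C2.parameters()), ...)` for the classifier pair, so that steps B and C update disjoint parameter sets.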
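Likewise, the quoted training settings translate directly into optimizer configurations. The helper below is a hypothetical convenience (the name `make_optimizers` and the parameter grouping are assumptions); only the numeric hyper-parameters and the 20k/40k iteration schedule come from the paper, and the paper's learning-rate decay schedule is not reproduced here.

```python
# The paper's quoted hyper-parameters mapped onto torch.optim.SGD.
# ASSUMPTIONS: the helper name and parameter grouping are illustrative;
# only the numbers and the 20k + 40k schedule are quoted from the paper.
import torch

def make_optimizers(cls_model, seg_model):
    # Image classification: lr 3e-4, momentum 0.9, weight decay 5e-4.
    cls_opt = torch.optim.SGD(cls_model.parameters(), lr=3e-4,
                              momentum=0.9, weight_decay=5e-4)
    # Semantic segmentation (DeepLabv2): initial lr 2.5e-4,
    # momentum 0.9, weight decay 1e-4.
    seg_opt = torch.optim.SGD(seg_model.parameters(), lr=2.5e-4,
                              momentum=0.9, weight_decay=1e-4)
    return cls_opt, seg_opt

# Segmentation schedule: 20k iterations with the classification loss
# L_cls only, then 40k iterations of the full Algorithm 1 procedure.
# The trade-off weight alpha is fixed to 0.01 in all experiments.
WARMUP_ITERS, ADAPT_ITERS = 20_000, 40_000
```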