Visual Domain Adaptation by Consensus-Based Transfer to Intermediate Domain

Authors: Jongwon Choi, Youngjoon Choi, Jihoon Kim, Jinyeop Chang, Ilhwan Kwon, Youngjune Gwon, Seungjai Min (pp. 10655-10662)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the proposed framework in hard adaptation scenarios with real-world datasets, from simple synthetic domains to complex real-world domains. The proposed algorithm outperforms the previous state-of-the-art algorithms in various environments." Implementation: "The number of multiple classifiers was set to Nc = 5. The learning rates for the feature generator and the classifiers were set to 0.0002, while the learning rates of the relatively small image adaptors were set to 2e-6. The weight-decay loss was added to all the optimization steps with a loss scale of 0.0005. The detailed network architectures are given in Appendix A of the supplementary document. We used PyTorch (Paszke et al. 2017) to implement the proposed framework. The computational environment had an Intel i7-8700 CPU @ 3.20GHz, 32GB RAM, and an NVIDIA GTX 1080 Ti GPU." Ablation tests: "To analyze the proposed framework of Intermediate domain transfer and Ensemble classifier for Domain Adaptation (IEDA), we additionally implement several variants of IEDA."
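To make the quoted implementation details concrete, the following is a minimal PyTorch sketch of the components the paper names: a shared feature generator G, an ensemble of Nc = 5 classifiers, and a small image adaptor per domain. All module names and layer shapes here are illustrative assumptions; the actual architectures are given in Appendix A of the paper's supplementary document.

    import torch
    import torch.nn as nn

    NUM_CLASSIFIERS = 5  # Nc = 5, as stated in the paper

    class FeatureGenerator(nn.Module):
        # Placeholder backbone; the real architecture is in Appendix A.
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        def forward(self, x):
            return self.net(x)

    class Classifier(nn.Module):
        # One head of the ensemble; the paper uses Nc of these.
        def __init__(self, feat_dim=128, num_classes=10):
            super().__init__()
            self.fc = nn.Linear(feat_dim, num_classes)

        def forward(self, f):
            return self.fc(f)

    class ImageAdaptor(nn.Module):
        # The "relatively small" image adaptor; a single residual conv here,
        # purely as an assumption about its scale.
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, x):
            return x + self.conv(x)

    G = FeatureGenerator()
    classifiers = nn.ModuleList(Classifier() for _ in range(NUM_CLASSIFIERS))
    A_s, A_t = ImageAdaptor(), ImageAdaptor()  # source/target image adaptors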
Researcher Affiliation | Industry | Jongwon Choi, Youngjoon Choi, Jihoon Kim, Jinyeop Chang, Ilhwan Kwon, Youngjune Gwon, Seungjai Min (Samsung SDS). Y. Choi was with Samsung SDS while working on this paper.
Pseudocode | Yes | Algorithm 1: Sequential training method
    Input: X_s, Y_s, X_t, {G, C_1, ..., C_Nc}, {G', C'_1, ..., C'_Nc}
    while not at the maximum iteration do
        Update G, C_1, ..., C_Nc using Eq. 7
        Update C_1, ..., C_Nc using Eq. 8
        Update A_t, G using Eq. 9
        Estimate Ŷ_t using Eq. 10
        Update G', C'_1, ..., C'_Nc using Eq. 11
        Update C'_1, ..., C'_Nc using Eq. 12
        Update G' using Eq. 13
        Update A_s using Eq. 14
    end
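Read as control flow, Algorithm 1 alternates eight update steps per iteration across the first-stage network (G, C_1..C_Nc), the primed second-stage network (G', C'_1..C'_Nc), and the two image adaptors. The sketch below mirrors that structure in Python; the loss_eq7 .. loss_eq14 callables, the estimate_eq10 pseudo-labeling step, and the inputs passed to each are placeholders for the paper's Eqs. 7-14, which this report does not reproduce, so treat the argument lists as assumptions.

    def optimize(optimizer, loss):
        # One gradient step on the given loss.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    for _ in range(max_iterations):
        x_s, y_s = next(source_iter)   # labeled source batch (X_s, Y_s)
        x_t = next(target_iter)        # unlabeled target batch (X_t)

        optimize(opt_gen_cls, loss_eq7(G, classifiers, x_s, y_s))    # update G, C_1..C_Nc
        optimize(opt_cls, loss_eq8(G, classifiers, x_t))             # update C_1..C_Nc
        optimize(opt_at_gen, loss_eq9(A_t, G, x_t))                  # update A_t, G
        y_t_hat = estimate_eq10(G, classifiers, A_t, x_t)            # pseudo-labels Ŷ_t
        optimize(opt_gen2_cls2,
                 loss_eq11(G2, classifiers2, x_t, y_t_hat))          # update G', C'_1..C'_Nc
        optimize(opt_cls2, loss_eq12(G2, classifiers2, x_s))         # update C'_1..C'_Nc
        optimize(opt_gen2, loss_eq13(G2, x_s))                       # update G'
        optimize(opt_as, loss_eq14(A_s, G2, x_s))                    # update A_s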
Open Source Code | No | The paper mentions "The detailed network architectures are given in Appendix A of the supplementary document," with a footnote linking to https://sites.google.com/site/jwchoivision/. This link leads to a personal website hosting supplementary material, not to a code repository, and the paper makes no explicit statement of code release for this work.
Open Datasets | Yes | "With digit datasets of MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011), and USPS (LeCun et al. 1998)" "We evaluate the proposed framework on object classification datasets: CIFAR10 (Krizhevsky and Hinton 2009), STL10 (Coates, Ng, and Lee 2011), and the VisDA dataset (Peng et al. 2017)." "To show the robustness of the proposed framework for challenging problems, we applied the proposed algorithm to adapt the recognition of pneumonia from an aligned dataset (Kermany et al. 2018) into a misaligned dataset (Wang et al. 2017)."
Dataset Splits | No | The paper refers to "train data" and to validating the proposed framework, but it does not specify concrete training/validation/test splits (e.g., percentages or sample counts) or reference predefined splits for reproducibility.
Hardware Specification | Yes | "The computational environment had an Intel i7-8700 CPU @ 3.20GHz, 32GB RAM, and an NVIDIA GTX 1080 Ti GPU."
Software Dependencies | No | The paper states "We used PyTorch (Paszke et al. 2017) to implement the proposed framework." While PyTorch is mentioned, no version number for PyTorch or any other software dependency is provided, only a citation for the framework.
Experiment Setup | Yes | "The number of multiple classifiers was set to Nc = 5. The learning rates for the feature generator and the classifiers were set to 0.0002, while the learning rates of the relatively small image adaptors were set to 2e-6. The weight-decay loss was added to all the optimization steps with a loss scale of 0.0005."
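As a reading aid, here is one way the stated hyperparameters could be wired into PyTorch optimizers, reusing the G, classifiers, A_s, and A_t modules sketched earlier. The paper does not state which optimizer it uses, so Adam is an assumption, as is mapping the 0.0005 weight-decay loss onto the optimizers' weight_decay argument.

    import itertools
    import torch

    LR_MAIN = 2e-4       # feature generator and classifiers
    LR_ADAPTOR = 2e-6    # "relatively small" image adaptors
    WEIGHT_DECAY = 5e-4  # weight-decay loss scale applied to all steps

    # Optimizer choice (Adam) is an assumption; the paper only gives the rates.
    opt_gen_cls = torch.optim.Adam(
        itertools.chain(G.parameters(),
                        *(c.parameters() for c in classifiers)),
        lr=LR_MAIN, weight_decay=WEIGHT_DECAY)
    opt_adaptors = torch.optim.Adam(
        itertools.chain(A_s.parameters(), A_t.parameters()),
        lr=LR_ADAPTOR, weight_decay=WEIGHT_DECAY)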