Multi-Source Distilling Domain Adaptation
Authors: Sicheng Zhao, Guangzhi Wang, Shanghang Zhang, Yang Gu, Yaxian Li, Zhichao Song, Pengfei Xu, Runbo Hu, Hua Chai, Kurt Keutzer. Pages 12975-12983.
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on public DA benchmarks, and the results demonstrate that the proposed MDDA significantly outperforms the state-of-the-art approaches. Our source code is released at: https://github.com/daoyuan98/MDDA. We evaluate the proposed MDDA model on the multi-source domain adaptation task in visual classification applications, including digit recognition and object classification. |
| Researcher Affiliation | Collaboration | Sicheng Zhao,1# Guangzhi Wang,2# Shanghang Zhang,1# Yang Gu,2 Yaxian Li,2,3 Zhichao Song,2 Pengfei Xu,2 Runbo Hu,2 Hua Chai,2 Kurt Keutzer1 — 1University of California, Berkeley, USA; 2Didi Chuxing, China; 3Renmin University of China, China |
| Pseudocode | No | The paper describes the proposed MDDA framework and its four stages in textual descriptions and a diagram (Figure 2), but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is released at: https://github.com/daoyuan98/MDDA. |
| Open Datasets | Yes | Benchmarks Digits-five includes 5 digit image datasets sampled from different domains, including handwritten mt (MNIST) (Le Cun et al. 1998), combined mm (MNIST-M) (Ganin and Lempitsky 2015), street image sv (SVHN) (Netzer et al. 2011), synthetic sy (Synthetic Digits) (Ganin and Lempitsky 2015), and handwritten up (USPS) (Hull 1994). Office-31 (Saenko et al. 2010) contains 4,110 images within 31 categories, which are collected from office environment in 3 image domains: A (Amazon) downloaded from amazon.com, W (Webcam) and D (DSLR) taken by web camera and digital SLR camera, respectively. |
| Dataset Splits | Yes | Following (Xu et al. 2018; Peng et al. 2019), we sample 25,000 images for training and 9,000 for testing in mt, mm, sv, sy, and select the entire 9,298 images in up as a domain. |
| Hardware Specification | No | The paper describes the model architecture and experimental setup (e.g., backbone networks, alpha value), but does not specify any hardware components such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions using Alexnet as a backbone and mathematical formulations for losses and distances, but it does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers that would be necessary for replication. |
| Experiment Setup | Yes | In the Digits-five experiments, we use three convolutional layers and two fully connected layers as the encoder and one fully connected layer as the classifier. In the Office-31 experiments, we use AlexNet as our backbone. The last layer is used as the classifier and the other layers are used as the encoder. Following (Gulrajani et al. 2017), we set α in Eq. (5) to 10. |
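The Digits-five setup quoted above (three convolutional layers plus two fully connected layers as the encoder, one fully connected layer as the classifier) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' released code: the channel widths, kernel sizes, feature dimension, and the 32x32 RGB input resolution are all assumptions, since the paper excerpt does not specify them.

```python
import torch
import torch.nn as nn


class DigitEncoder(nn.Module):
    """Shared encoder: three conv layers followed by two FC layers."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # For a 32x32 input, three stride-2 convs yield a 4x4 feature map.
        self.fc = nn.Sequential(
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        h = self.conv(x)
        return self.fc(h.flatten(1))


class DigitClassifier(nn.Module):
    """Classifier: a single fully connected layer over encoder features."""

    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):
        return self.out(feats)


encoder, classifier = DigitEncoder(), DigitClassifier()
logits = classifier(encoder(torch.randn(8, 3, 32, 32)))  # (batch, 10) logits
```

For the Office-31 experiments, the analogous split would use a pretrained AlexNet with its final layer as the classifier and the remaining layers as the encoder; the α = 10 setting follows the gradient-penalty weight of Gulrajani et al. (2017).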