Discriminative Adversarial Domain Adaptation
Authors: Hui Tang, Kui Jia (pp. 5940-5947)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show the efficacy of our proposed methods and we achieve the new state of the art for all the three settings on benchmark datasets. |
| Researcher Affiliation | Academia | Hui Tang, Kui Jia South China University of Technology eehuitang@mail.scut.edu.cn, kuijia@scut.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We implement all our methods by PyTorch. The code will be available at https://github.com/huitangtang/DADA-AAAI2020. |
| Open Datasets | Yes | Office-31 (Saenko et al. 2010) is a popular benchmark domain adaptation dataset... Syn2Real (Peng et al. 2018) is the largest benchmark. |
| Dataset Splits | Yes | We follow standard evaluation protocols for unsupervised domain adaptation (Ganin et al. 2016; Wang et al. 2019): we use all labeled source and all unlabeled target instances as the training data. ... We use the training domain as the source domain and validation one as the target domain. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states 'We implement all our methods by PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We follow DANN (Ganin et al. 2016) to use the SGD training schedule: the learning rate is adjusted by η_p = η_0 / (1 + αp)^β, where p denotes the progress of training iterations normalized to [0, 1], and we set η_0 = 0.0001, α = 10, and β = 0.75; the hyper-parameter λ is initialized at 0 and is gradually increased to 1 by λ_p = 2 / (1 + exp(−γp)) − 1, where we set γ = 10. We empirically set q = 0.1. |
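The two schedules quoted above can be sketched in a few lines of Python. This is a minimal illustration of the DANN-style formulas with the hyper-parameter values reported in the paper; the function names are ours, not from the authors' code.

```python
import math

def lr_schedule(p, eta0=1e-4, alpha=10.0, beta=0.75):
    """Learning-rate decay: eta_p = eta0 / (1 + alpha*p)**beta,
    where p is training progress normalized to [0, 1]."""
    return eta0 / (1.0 + alpha * p) ** beta

def lambda_schedule(p, gamma=10.0):
    """Adversarial weight ramp-up: lambda_p = 2 / (1 + exp(-gamma*p)) - 1,
    rising from 0 at p = 0 toward 1 as p -> 1."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

With these settings the learning rate decays monotonically from 1e-4, while λ starts at 0 and saturates near 1 by the end of training.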