Heuristic Domain Adaptation
Authors: Shuhao Cui, Xuan Jin, Shuhui Wang, Yuan He, Qingming Huang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on three challenging domain adaptation tasks, i.e., unsupervised domain adaptation, multi-source domain adaptation and semi-supervised domain adaptation. |
| Researcher Affiliation | Collaboration | 1Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS 2University of Chinese Academy of Sciences 3Alibaba Group 4Peng Cheng Laboratory {cuishuhao18s, wangshuhui}@ict.ac.cn, {jinxuan.jx, heyuan.hy}@alibaba-inc.com qmhuang@ucas.ac.cn |
| Pseudocode | No | The paper describes algorithms and formulations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | The code is available at https://github.com/cuishuhao/HDA. |
| Open Datasets | Yes | In UDA, we use a standard dataset Office-Home [51] with 15,500 images in 65 categories. For evaluation on MSDA, we utilize a challenging dataset DomainNet [39], which contains about 600,000 images in 345 categories. In SSDA, we utilize the standard dataset proposed by MME [43], which is selected from DomainNet [39]. |
| Dataset Splits | Yes | We also show the results under different settings of M and an ablation study in Table 2. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments, only mentioning general training settings. |
| Software Dependencies | No | The paper mentions 'PyTorch' as an implementation framework but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We employ Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a weight decay of 0.0005 to train our model. We choose ResNet101 [24] as the basic backbone, and set the initial learning rate as 0.0003. We use the ResNet34 [24] as the backbone of the generator and fix the number of sub-networks to be 3, with the initial learning rate set to 0.001. |
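The optimizer settings quoted above can be reproduced directly. As a minimal sketch (not the authors' code), the update below implements SGD with momentum and L2 weight decay using the reported hyperparameters: momentum 0.9, weight decay 0.0005, and the 0.0003 initial learning rate used for the ResNet101 backbone. Function and variable names are illustrative assumptions.

```python
# Hyperparameters reported in the paper (backbone training).
LR = 0.0003           # initial learning rate
MOMENTUM = 0.9        # momentum coefficient
WEIGHT_DECAY = 0.0005 # L2 weight-decay strength

def sgd_step(params, grads, velocities):
    """One SGD update with momentum and weight decay (PyTorch convention:
    the L2 penalty is folded into the gradient before the momentum buffer
    is updated). Operates on flat lists of floats for illustration."""
    new_params, new_velocities = [], []
    for p, g, v in zip(params, grads, velocities):
        g = g + WEIGHT_DECAY * p   # add L2 penalty term to the gradient
        v = MOMENTUM * v + g       # accumulate momentum
        new_params.append(p - LR * v)
        new_velocities.append(v)
    return new_params, new_velocities

# Example: one step from p=1.0 with gradient 0.5 and zero velocity.
params, vels = sgd_step([1.0], [0.5], [0.0])
```

In a PyTorch setup, the equivalent configuration would be `torch.optim.SGD(model.parameters(), lr=0.0003, momentum=0.9, weight_decay=0.0005)`.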