Reducing the Covariate Shift by Mirror Samples in Cross Domain Alignment
Authors: Yin Zhao, Minquan Wang, Longjun Cai
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | By applying the virtual mirror and mirror loss to the generic unsupervised domain adaptation model, we achieved consistently superior performance on several mainstream benchmarks. |
| Researcher Affiliation | Industry | Yin Zhao, Alibaba Group, yinzhao.zy@alibaba-inc.com; Minquan Wang, Alibaba Group, minquan.wmq@alibaba-inc.com; Longjun Cai, Alibaba Group, longjun.clj@alibaba-inc.com |
| Pseudocode | Yes | The detailed algorithm can be found in Appendix. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is open or publicly available. |
| Open Datasets | Yes | Datasets. We use Office-31 [42], Office-Home [51], ImageCLEF and VisDA2017 [39] to validate our proposed method. |
| Dataset Splits | No | The paper mentions using 'validation' as a target domain for VisDA2017, but it does not provide explicit percentages or sample counts for training, validation, or test splits for any of the datasets used in the main text. |
| Hardware Specification | Yes | All the experiments are carried out on one Tesla V100 GPU. |
| Software Dependencies | No | We implement our model in PyTorch. (No version specified for PyTorch or other libraries) |
| Experiment Setup | Yes | The learning rate is adjusted by ηp = η0(1 + αp)^(−β) like [17], where p is the training progress normalized in [0, 1], η0 = 0.001, α = 10 and β = 0.75. The learning rate of fully connected layers is 10 times that of the backbone layers. [...] (K = 3) [...] λ = 1.0. |
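
The quoted schedule is the standard inverse-decay annealing common in domain adaptation work; a minimal sketch, assuming the extraction lost the negative exponent and the rule is ηp = η0(1 + αp)^(−β):

```python
def lr_schedule(p, eta0=0.001, alpha=10.0, beta=0.75):
    """Annealed learning rate; p is training progress normalized to [0, 1]."""
    return eta0 * (1.0 + alpha * p) ** (-beta)

def backbone_and_fc_lr(p):
    """Per the paper, fully connected layers use 10x the backbone rate."""
    base = lr_schedule(p)
    return base, 10.0 * base

print(lr_schedule(0.0))  # 0.001 at the start of training
```

The function names here are illustrative, not from the paper; only the formula and the constants η0 = 0.001, α = 10, β = 0.75 and the 10x multiplier are quoted from the setup row above.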