Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation
Authors: Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirical results and ablation studies confirm the effectiveness of the proposed approach, especially in the presence of within-domain class imbalance and between-domain class distribution shift." and "Datasets. We evaluate on Office-31, Office-Home and VisDA-2017." |
| Researcher Affiliation | Collaboration | 1. Imagia, Canada; 2. Dalhousie University, Canada; 3. Mila, Université de Montréal, Canada; 4. Polish Academy of Sciences, Poland. |
| Pseudocode | Yes | Algorithm 1 The proposed implicit alignment training |
| Open Source Code | Yes | Code: https://github.com/xiangdal/implicit_alignment |
| Open Datasets | Yes | Datasets. We evaluate on Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017) and VisDA-2017 (synthetic→real) (Peng et al., 2017) |
| Dataset Splits | No | The paper uses well-known datasets but does not explicitly provide specific training, validation, or test dataset split percentages or counts for its experiments. |
| Hardware Specification | Yes | Xiang Jiang acknowledges the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research. |
| Software Dependencies | No | The paper mentions deep learning models and datasets but does not provide specific version numbers for software dependencies like PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | "The batch size is 31 for Office-31 and 50 for Office-Home." and "We only update pseudo-labels periodically, i.e., every 20 steps, instead of at every training step." |
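The periodic pseudo-label refresh quoted in the Experiment Setup row can be sketched as a minimal training-loop skeleton. This is a hypothetical illustration, not the paper's released implementation; the function name, the `update_interval` parameter, and the placeholder label string are assumptions.

```python
# Minimal sketch, assuming the paper's schedule: pseudo-labels are recomputed
# every `update_interval` steps (20 in the paper), not at every training step.

def train_with_periodic_pseudo_labels(num_steps, update_interval=20):
    """Return the steps at which pseudo-labels were (re)computed."""
    refresh_steps = []
    pseudo_labels = None
    for step in range(num_steps):
        if step % update_interval == 0:
            # In the real method, the current model would be run over the
            # unlabeled target domain here to produce fresh pseudo-labels.
            pseudo_labels = f"labels@{step}"  # placeholder for real labels
            refresh_steps.append(step)
        # ... one training step using the cached `pseudo_labels` goes here ...
    return refresh_steps
```

With 100 training steps and the paper's interval of 20, pseudo-labels would be refreshed only five times instead of one hundred, which keeps the labeling overhead small relative to the training cost.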