Multi-level Consistency Learning for Semi-supervised Domain Adaptation
Authors: Zizheng Yan, Yushuang Wu, Guanbin Li, Yipeng Qin, Xiaoguang Han, Shuguang Cui
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we verified the effectiveness of our MCL framework on three popular SSDA benchmarks, i.e., the VisDA2017, DomainNet, and Office-Home datasets, and the experimental results demonstrate that our MCL framework achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | ¹Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen; ²Sun Yat-sen University; ³Cardiff University. Emails: {zizhengyan@link, yushuangwu@link, hanxiaoguang@, shuguangcui@}.cuhk.edu.cn, liguanbin@mail.sysu.edu.cn, qiny16@cardiff.ac.uk |
| Pseudocode | No | No pseudocode or algorithm block was found. |
| Open Source Code | Yes | Code is available at https://github.com/chester256/MCL. |
| Open Datasets | Yes | We evaluate our proposed MCL on several popular benchmark datasets, including VisDA2017 [Peng et al., 2017], DomainNet [Peng et al., 2019], and Office-Home [Venkateswara et al., 2017]. |
| Dataset Splits | No | No explicit train/validation/test split percentages or counts were provided; the paper only mentions '1-shot and 3-shot experiments', referring to the number of labeled target samples per class (a sampling sketch follows the table). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments were found. Only a general mention of 'High Performance Computing Services' was present. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'POT' but does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | Similar to [Li et al., 2021a], the threshold τ (Eq. 12) is set to 0.95, and we use the softmax temperature T to control the sharpness of the prediction for the thresholding operation (1 for DomainNet, 1.25 for Office-Home and VisDA). The loss-weight balancing hyperparameter λ1 is set to 1, and λ2 is set to 1 for DomainNet, 0.2 for Office-Home, and 0.1 for VisDA. We use random flip and random crop as the augmentation methods for view A and RandAugment [Cubuk et al., 2020] for view B. Moreover, the momentum m used to update source prototypes is set to 0.9. A hedged sketch of these pieces follows the table. |
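
The 1-shot/3-shot splits noted above are defined by the number of labeled target samples per class, not by percentage splits. The paper does not include split-construction code, so the following is a minimal sketch under that assumption; the function name `k_shot_split` and its arguments are hypothetical.

```python
# Hypothetical sketch: build a k-shot labeled target split for SSDA,
# i.e., sample k labeled examples per class (k = 1 or 3 in the paper).
import random
from collections import defaultdict

def k_shot_split(labels, k, seed=0):
    """Return (labeled_idx, unlabeled_idx) with k labeled samples per class.

    `labels` is assumed to be a list of integer class indices for the
    target-domain dataset.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    labeled = []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        labeled.extend(idxs[:k])          # keep k samples for this class
    unlabeled = sorted(set(range(len(labels))) - set(labeled))
    return sorted(labeled), unlabeled

# Usage: labeled_idx, unlabeled_idx = k_shot_split(target_labels, k=3)
```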
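The Experiment Setup row fixes three concrete knobs: a confidence threshold τ = 0.95 applied to a temperature-sharpened softmax, loss weights λ1/λ2, and a momentum m = 0.9 for updating source prototypes. The PyTorch sketch below illustrates only the thresholding and the prototype EMA update; it is not the authors' released implementation (see the linked GitHub repository for that), and the helper names are made up for illustration.

```python
import torch
import torch.nn.functional as F

TAU = 0.95   # confidence threshold tau from Eq. 12 of the paper
T = 1.25     # softmax temperature (1 for DomainNet, 1.25 for Office-Home/VisDA)
M = 0.9      # momentum m for the source-prototype update

def confident_pseudo_labels(logits, tau=TAU, temperature=T):
    """Temperature-sharpened softmax; keep only predictions above tau."""
    probs = F.softmax(logits / temperature, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf.ge(tau)                   # boolean mask of confident samples
    return pseudo, mask

def update_prototypes(prototypes, feats, labels, m=M):
    """Exponential-moving-average update of per-class source prototypes.

    `prototypes` is a (num_classes, feat_dim) tensor; `feats` and `labels`
    come from a labeled source batch.
    """
    for c in labels.unique():
        cls_mean = feats[labels == c].mean(dim=0)
        prototypes[c] = m * prototypes[c] + (1 - m) * cls_mean
    return prototypes
```

A usage pattern consistent with the row above would be to call `confident_pseudo_labels` on unlabeled-target logits each step, mask the consistency loss with `mask`, and call `update_prototypes` on each labeled source batch.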