Cycle Self-Refinement for Multi-Source Domain Adaptation
Authors: Chaoyang Zhou, Zengmao Wang, Bo Du, Yong Luo
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on Office-31, Office-Home, and DomainNet show that the proposed method outperforms the state-of-the-art methods for most tasks. |
| Researcher Affiliation | Academia | (1) School of Computer Science, Wuhan University; (2) National Engineering Research Center for Multimedia Software, Wuhan University; (3) Institute of Artificial Intelligence, Wuhan University; (4) Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University; (5) Hubei Luojia Laboratory, China. {zhoucy, wangzengmao, dubo, luoyong}@whu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is released at https://github.com/zcy866/CSR. |
| Open Datasets | Yes | Three popular benchmark datasets are adopted in the experiments, i.e., Office-31 (Saenko et al. 2010), Office-Home (Venkateswara et al. 2017), and DomainNet (Peng et al. 2019). |
| Dataset Splits | No | The paper states how domains are used for source and target roles, but does not explicitly provide percentages or counts for training, validation, or test splits within the datasets. |
| Hardware Specification | Yes | Experiments are done on an Nvidia V100 GPU. |
| Software Dependencies | No | The paper mentions ResNet-50 and ResNet-101 backbones and RandAugment, but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions); see the dependency sketch after the table. |
| Experiment Setup | Yes | Meanwhile, for the trade-off parameter β and the filtering threshold τ, we set (0.7, 0.9) for Office-31 and Office-Home, and (0.7, 0.6) for DomainNet. RandAugment (Cubuk et al. 2020) is adopted as data augmentation for the target domain T. We utilize the same learning rate and schedule as (Zhu, Zhuang, and Wang 2019); see the training-step sketch below the table. |
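
Since the paper names ResNet-50/ResNet-101 backbones and RandAugment but pins no framework versions, the following is a minimal sketch of that setup assuming PyTorch and torchvision. The pretrained-weight choice and the RandAugment parameters shown are assumptions for illustration, not values stated in the paper.

```python
# Hypothetical dependency sketch: the paper names ResNet-50/101 and RandAugment
# but pins no versions; PyTorch + torchvision are assumed here.
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained backbones, as is standard for Office-31/Office-Home/DomainNet.
backbone_small = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone_large = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)

# RandAugment (Cubuk et al. 2020) as the strong augmentation for target images.
strong_aug = T.Compose([
    T.Resize(256),
    T.RandomCrop(224),
    T.RandAugment(num_ops=2, magnitude=9),  # library defaults; not stated in the paper
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```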
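The paper reports the trade-off parameter β and the filtering threshold τ but provides no pseudocode, so the sketch below illustrates one common way such hyper-parameters enter a pseudo-labeling training step: FixMatch-style confidence filtering with τ and a β-weighted target loss. The function `step`, its arguments, and the exact loss composition are hypothetical and do not reproduce the authors' CSR method.

```python
# Hypothetical training-step sketch under the stated hyper-parameters.
import torch
import torch.nn.functional as F

beta, tau = 0.7, 0.9  # (0.7, 0.9) for Office-31/Office-Home; (0.7, 0.6) for DomainNet

def step(model, src_x, src_y, tgt_weak_x, tgt_strong_x):
    # Supervised loss on labeled source data.
    src_loss = F.cross_entropy(model(src_x), src_y)

    # Pseudo-labels from weakly augmented target images, kept only above tau.
    with torch.no_grad():
        probs = F.softmax(model(tgt_weak_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf >= tau).float()

    # Consistency loss on the RandAugment-ed (strong) view of the same images.
    tgt_loss = (F.cross_entropy(model(tgt_strong_x), pseudo_y,
                                reduction="none") * mask).mean()

    # beta trades off the source and target terms.
    return src_loss + beta * tgt_loss
```

The confidence mask is computed without gradients so that only the strong-view branch drives the target loss, which is the usual design choice in threshold-based pseudo-labeling.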