Self-paced Supervision for Multi-source Domain Adaptation

Authors: Zengmao Wang, Chaoyang Zhou, Bo Du, Fengxiang He

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on Office-31, Office-Home and DomainNet show that the proposed method outperforms the state-of-the-art methods." and "4 Experiments: In this section, we will verify the effectiveness of the proposed method with three popular datasets in domain adaptation, including Office-31, Office-Home and DomainNet."
Researcher Affiliation | Collaboration | Zengmao Wang¹, Chaoyang Zhou¹, Bo Du¹ and Fengxiang He². ¹National Engineering Research Center for Multimedia Software, School of Computer Science, Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China. ²JD Explore Academy, JD.com Inc., China.
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | "In this section, we will verify the effectiveness of the proposed method with three popular datasets in domain adaptation, including Office-31, Office-Home and DomainNet [Zhu et al., 2019; Li et al., 2021]."
Dataset Splits | No | The paper defines the source and target domains ("For each dataset, one domain in it is treated as target domain while the other domains are treated as the source domains"; see the protocol sketch after this table), but it does not give explicit training, validation, or test splits within those domains, whether as percentages, counts, or references to predefined splits.
Hardware Specification | No | The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | The paper mentions general settings such as "Resnet-50 is adopted" and that the "optimizer and the learning schedule are set same with [Zhu et al., 2019]", but it does not list software packages or version numbers for reproducibility.
Experiment Setup | Yes | "We set the learning rate as 0.001 while the optimizer and the learning schedule are set same with [Zhu et al., 2019]." There are two trade-off parameters, λ and β: β is set to 0.01 in all experiments, while λ is set to 0.1 on Office-31 and to 1 on Office-Home and DomainNet. At initialization, the deep network is pre-trained without domain alignment for the first iterations: 2000 for Office-31, 1000 for Office-Home, and 10000 for DomainNet (a hedged configuration sketch follows the table).
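
The splitting protocol quoted in the Dataset Splits row is a standard leave-one-domain-out setup. Below is a minimal sketch of that protocol, assuming the standard Office-31 domain names; the quoted text does not enumerate them, and within each domain the paper gives no train/val/test split, which is exactly the gap flagged above.

```python
# Leave-one-domain-out protocol for multi-source domain adaptation:
# each domain in turn serves as the target, the rest as sources.
# Domain names follow the standard Office-31 release (an assumption;
# the quoted text does not list them).
OFFICE31_DOMAINS = ["amazon", "dslr", "webcam"]

def leave_one_domain_out(domains):
    """Yield (source_domains, target_domain) pairs."""
    for target in domains:
        sources = [d for d in domains if d != target]
        yield sources, target

for sources, target in leave_one_domain_out(OFFICE31_DOMAINS):
    print(f"sources={sources} -> target={target}")
```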
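
The Experiment Setup row can likewise be collected into a single configuration. The sketch below is a reading aid, not the authors' code: the dict layout, key names, and helper function are assumptions, and only the numeric values (learning rate, β, per-dataset λ, warm-up iterations) and the ResNet-50 backbone come from the paper.

```python
# Hyperparameters as reported in the paper; everything structural here
# (names, layout, helper function) is an illustrative assumption.
CONFIG = {
    "backbone": "resnet50",    # "Resnet-50 is adopted"
    "learning_rate": 1e-3,     # optimizer/schedule follow [Zhu et al., 2019]
    "beta": 0.01,              # trade-off parameter, all datasets
    "lambda": {                # trade-off parameter, per dataset
        "office31": 0.1,
        "office_home": 1.0,
        "domainnet": 1.0,
    },
    "warmup_iters": {          # pre-training without domain alignment
        "office31": 2000,
        "office_home": 1000,
        "domainnet": 10000,
    },
}

def domain_alignment_enabled(step: int, dataset: str) -> bool:
    """Alignment losses are switched on only after the warm-up phase."""
    return step >= CONFIG["warmup_iters"][dataset]
```

For example, domain_alignment_enabled(1500, "office31") returns False, matching the 2000-iteration warm-up reported for Office-31.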