High-Level Semantic Feature Matters Few-Shot Unsupervised Domain Adaptation

Authors: Lei Yu, Wanqi Yang, Shengqi Huang, Lei Wang, Ming Yang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on DomainNet show that the proposed method significantly outperforms SOTA methods in FS-UDA by a large margin (i.e., 10%).
Researcher Affiliation | Academia | (1) School of Computer and Electronic Information, Nanjing Normal University, China; (2) School of Computing and Information Technology, University of Wollongong, Australia
Pseudocode | No | The paper describes methodological steps but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described method.
Open Datasets | Yes | We conduct extensive experiments on a multi-domain benchmark dataset DomainNet to demonstrate the efficacy of our method. DomainNet was released in 2019 for research on multi-source domain adaptation (Peng et al. 2019).
Dataset Splits | Yes | The dataset is split into 217, 43 and 48 categories for episodic training, model validation and testing new tasks, respectively.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | In the cross-domain self-training module, we set the similarity-score threshold to 1.7 to select confident samples in the target domain. The margin m in Eq. (3) is empirically set to 1.5. ... The hyperparameters λ_sfa, λ_spa and λ_clm are set to 0.1, 0.05 and 0.01, respectively, by grid search. Also, we employ the Adam optimizer with an initial learning rate of 10^-4 and reduce the learning rate by half every 280 episodes. (See the configuration sketch below.)
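
The reported training configuration can be summarized in a short, hedged PyTorch-style sketch. Only the numeric values (learning rate, schedule, margin, similarity threshold, loss weights) come from the paper excerpt above; the model, the loss terms, the number of episodes, and the assumption of one scheduler step per episode are placeholders for illustration, not the authors' implementation.

```python
# Hedged sketch of the reported training configuration (PyTorch).
# Numeric values are taken from the paper's Experiment Setup description;
# everything else (model, loss terms, episode count) is a placeholder.
import torch
import torch.nn as nn

# Hyperparameters quoted in the "Experiment Setup" row
SIMILARITY_THRESHOLD = 1.7   # confidence threshold for selecting target-domain samples
MARGIN_M = 1.5               # margin m in Eq. (3)
LAMBDA_SFA = 0.1             # weight of the sfa loss term
LAMBDA_SPA = 0.05            # weight of the spa loss term
LAMBDA_CLM = 0.01            # weight of the clm loss term

# Placeholder embedding network (the paper's backbone is not reproduced here)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 64))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate every 280 episodes (one scheduler step per episode assumed)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=280, gamma=0.5)

def episode_loss(task_loss, sfa_loss, spa_loss, clm_loss):
    """Combine per-episode loss terms with the reported weights.
    The exact composition of the total loss is an assumption."""
    return task_loss + LAMBDA_SFA * sfa_loss + LAMBDA_SPA * spa_loss + LAMBDA_CLM * clm_loss

for episode in range(1000):                 # episode count is illustrative
    x = torch.randn(8, 3, 84, 84)           # dummy episode batch
    emb = model(x)
    loss = episode_loss(emb.pow(2).mean(),  # dummy stand-ins for the real loss terms
                        torch.tensor(0.0), torch.tensor(0.0), torch.tensor(0.0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```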