Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling

Authors: Qian Wang, Toby Breckon (pp. 6243-6250)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on four datasets (i.e., Office-Caltech, Office-31, ImageCLEF-DA and Office-Home) validate that our approach outperforms contemporary state-of-the-art methods.
Researcher Affiliation | Academia | Department of Computer Science, Durham University, United Kingdom; Department of Engineering, Durham University, United Kingdom
Pseudocode | Yes | Algorithm 1: Unsupervised Domain Adaptation Using Selective Pseudo-Labeling (a hedged sketch of this loop follows the table).
Open Source Code | Yes | Code is available: https://github.com/hellowangqian/domainadaptation-capls
Open Datasets | Yes | The experiments use four commonly used domain adaptation datasets: Office+Caltech (Gong et al. 2012), Office-31 (Saenko et al. 2010), ImageCLEF-DA (Caputo et al. 2014) and Office-Home (Venkateswara et al. 2017).
Dataset Splits | No | The paper defines the source domain as labeled data and the target domain as unlabeled data to be classified, as is standard in unsupervised domain adaptation. While it describes iterative pseudo-labeling on the target data, it does not specify explicit training/validation/test splits in the traditional supervised sense, with percentages or sample counts, and defines no distinct validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The algorithm is implemented in MATLAB, but no version numbers for MATLAB or any other libraries/packages are provided.
Experiment Setup | Yes | As a trade-off, the values of d1 are set based on the number of classes in each dataset, giving d1 = 128, 512, 128 and 1024 for Office-Caltech, Office-31, ImageCLEF-DA and Office-Home respectively. The dimensionality of the space learned by SLPP is d2 = 128 uniformly for all datasets, and the number of iterations T is set to 10 in all experiments unless otherwise specified (these settings are collected in the configuration sketch below).