Disjoint Label Space Transfer Learning with Common Factorised Space

Authors: Xiaobin Chang, Yongxin Yang, Tao Xiang, Timothy M. Hospedales

AAAI 2019, pp. 3288-3295 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments: The proposed model is evaluated on progressively more challenging problems. First, we evaluate CFSM on unsupervised domain adaptation (UDA). Second, different DLSTL settings are considered, including semi-supervised DLSTL classification and unsupervised DLSTL retrieval. CFSM handles all these scenarios with minor modifications. The effectiveness of CFSM is demonstrated by its superior performance compared to existing work. Finally, insight is provided through an ablation study and visualisation analysis.
Researcher Affiliation | Academia | ¹Queen Mary University of London, ²The University of Edinburgh
Pseudocode | No | The paper describes the model architecture and optimization process but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | SVHN (Netzer et al. 2011) is the labelled source dataset and MNIST (LeCun et al. 1998) is the unlabelled target. ... Market (Zheng et al. 2015) and Duke (Zheng, Zheng, and Yang 2017). ... Sketchy dataset (Sangkloy et al. 2016). (See the dataset-loading sketch after the table.)
Dataset Splits | Yes | Results are averaged over ten random splits as in (Luo et al. 2017). ... We randomly split 75 classes as a labelled source domain and use the remaining 50 classes to define an unlabelled target domain with disjoint label space. (A class-disjoint split is sketched after the table.)
Hardware Specification | No | The paper mentions using ...
Software Dependencies | No | The paper mentions "Adam optimiser" but does not specify its version or the versions of any other software dependencies.
Experiment Setup | Yes | We set d_C = 50, β_M = 0.001 and β_C = 0.01. ... We set d_C = 10, β_M = β_C = 0.01. The learning rate is 0.001 and the Adam (Kingma and Ba 2014) optimiser is used. ... We set d_C = 2048, β_M = 2.0, β_C = 0.01. Adam optimiser is used with learning rate 3.5e-4. ... We set d_C = 512, β_M = 10⁻³, β_C = 0.1. Adam optimiser with learning rate 10⁻⁴ is used. (These values are collected in the configuration sketch after the table.)
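
All datasets named in the Open Datasets row are publicly downloadable. As a minimal sketch of fetching the two digit benchmarks, assuming PyTorch/torchvision (the paper does not name its framework, and the preprocessing here is illustrative, not quoted from the paper):

```python
from torchvision import datasets, transforms

# Illustrative preprocessing only; the paper's transforms are not quoted above.
to_tensor = transforms.Compose([transforms.Resize(32), transforms.ToTensor()])

# UDA setting: SVHN is the labelled source, MNIST the unlabelled target.
svhn_source = datasets.SVHN(root="data", split="train", download=True,
                            transform=to_tensor)
mnist_target = datasets.MNIST(root="data", train=True, download=True,
                              transform=to_tensor)
```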
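The Dataset Splits row quotes a class-disjoint 75/50 split of the Sketchy categories. A minimal sketch of such a split, assuming 125 classes indexed 0..124; the helper name `disjoint_class_split` and the seed are our own, not from the paper:

```python
import random

def disjoint_class_split(all_classes, num_source=75, seed=0):
    """Randomly partition classes into a labelled source set and an
    unlabelled target set whose label spaces are disjoint."""
    rng = random.Random(seed)
    classes = list(all_classes)
    rng.shuffle(classes)
    source_classes = set(classes[:num_source])
    target_classes = set(classes[num_source:])
    assert source_classes.isdisjoint(target_classes)
    return source_classes, target_classes

# Example: the 125 Sketchy categories split into 75 source / 50 target.
src, tgt = disjoint_class_split(range(125), num_source=75, seed=42)
```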
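The Experiment Setup row quotes four hyperparameter configurations. The sketch below merely collects them; the dict layout and setting keys are illustrative assumptions (the quotes do not name their benchmarks), and the first quote gives no learning rate, so it is left as None rather than guessed:

```python
import torch

# Numeric values quoted in the Experiment Setup row; everything else
# (keys, structure) is an assumption for illustration.
SETTINGS = {
    "setting_1": dict(d_C=50,   beta_M=1e-3, beta_C=0.01, lr=None),
    "setting_2": dict(d_C=10,   beta_M=0.01, beta_C=0.01, lr=1e-3),
    "setting_3": dict(d_C=2048, beta_M=2.0,  beta_C=0.01, lr=3.5e-4),
    "setting_4": dict(d_C=512,  beta_M=1e-3, beta_C=0.1,  lr=1e-4),
}

def make_optimiser(model: torch.nn.Module, lr: float) -> torch.optim.Adam:
    """Every quoted setting uses the Adam optimiser (Kingma and Ba 2014)."""
    return torch.optim.Adam(model.parameters(), lr=lr)
```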