Representation Learning with Multiple Lipschitz-Constrained Alignments on Partially-Labeled Cross-Domain Data

Authors: Songlei Jian, Liang Hu, Longbing Cao, Kai Lu

AAAI 2020, pp. 4320-4327 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "MULAN shows its superior performance on partially-labeled semi-supervised domain adaptation and few-shot domain adaptation and outperforms the state-of-the-art visual domain adaptation models by up to 12.1%." ... "Traditional visual DA datasets, such as MNIST, USPS, and SVHN, have been reported to be over-evaluated, achieving very high accuracy for almost all recent models (Tzeng et al. 2017; Motiian et al. 2017). Therefore, we adopt the latest VisDA Challenge dataset (Peng et al. 2017) in our experiments..."
Researcher Affiliation | Academia | Songlei Jian, Liang Hu, Longbing Cao, Kai Lu; College of Computer, National University of Defense Technology, China; Advanced Analytics Institute, University of Technology Sydney, Australia; {jiansonglei, kailu}@nudt.edu.cn, rainmilk@gmail.com, longbing.cao@uts.edu.au
Pseudocode | Yes | Algorithm 1: The Learning Process of MULAN
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | Yes | "Therefore, we adopt the latest VisDA Challenge dataset (Peng et al. 2017) in our experiments, which supports object classification of synthetic- and real-object images."
Dataset Splits | Yes | "The classification accuracy (mean ± std %) of Synthetic→Real domain adaptation with 5-fold validation on the VisDA Challenge dataset." (See the evaluation sketch after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments; it only mentions the use of ResNet50 features.
Software Dependencies | No | The paper mentions implementing 1-Lipschitz functions with SNMLPs, using the Adam optimizer, and using ResNet50 features, but does not provide version numbers for any software dependencies such as programming languages or libraries. (See the SNMLP sketch after this table.)
Experiment Setup | Yes | "We set a small margin m_ε = 1e-3 in Eqn. 9 to avoid overfitting to the target representation space H_t.", "we set γ = 0.02 in this paper through empirical test", and "All the image features, i.e., X_t and X_s in these methods, are represented by ResNet50 features (He et al. 2016) that are pre-trained on ImageNet." (See the feature-extraction sketch after this table.)
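
The paper states that its 1-Lipschitz functions are implemented as spectral-normalized MLPs (SNMLPs) trained with Adam, but releases no reference implementation. The following is a minimal sketch under the assumption of PyTorch and its torch.nn.utils.spectral_norm utility; the layer sizes and learning rate are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class SNMLP(nn.Module):
        # Each linear layer is spectrally normalized, bounding its Lipschitz
        # constant by 1; with 1-Lipschitz activations (ReLU), the composed
        # network is itself (approximately) 1-Lipschitz.
        def __init__(self, in_dim, hidden_dim=512, out_dim=1):  # sizes assumed
            super().__init__()
            self.net = nn.Sequential(
                spectral_norm(nn.Linear(in_dim, hidden_dim)), nn.ReLU(),
                spectral_norm(nn.Linear(hidden_dim, hidden_dim)), nn.ReLU(),
                spectral_norm(nn.Linear(hidden_dim, out_dim)),
            )

        def forward(self, x):
            return self.net(x)

    critic = SNMLP(in_dim=2048)  # 2048-d input matches ResNet50 features
    optimizer = torch.optim.Adam(critic.parameters(), lr=1e-4)  # lr assumed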
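
The experiment-setup row quotes that all image features X_t and X_s are ResNet50 features pre-trained on ImageNet, with hyperparameters m_ε = 1e-3 and γ = 0.02. The paper does not name its framework; a hedged sketch of this preprocessing step, assuming torchvision, might look like:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # ImageNet-pre-trained ResNet50 with its classification head removed,
    # leaving the 2048-d pooled features used as X_s and X_t.
    resnet = models.resnet50(pretrained=True)
    resnet.fc = torch.nn.Identity()
    resnet.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(pil_images):
        # pil_images: list of PIL images -> (N, 2048) feature matrix
        batch = torch.stack([preprocess(img) for img in pil_images])
        return resnet(batch)

    MARGIN = 1e-3  # m_epsilon in Eqn. 9, as quoted from the paper
    GAMMA = 0.02   # gamma, set by the authors through empirical test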
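
Results on VisDA are reported as mean ± std over 5-fold validation. The paper does not spell out how the folds are formed; a minimal sketch of such a protocol, assuming scikit-learn's KFold and a user-supplied (hypothetical) train_and_eval function, is:

    import numpy as np
    from sklearn.model_selection import KFold

    def five_fold_accuracy(features, labels, train_and_eval, seed=0):
        # train_and_eval(train_X, train_y, val_X, val_y) -> accuracy in [0, 1]
        accs = []
        kf = KFold(n_splits=5, shuffle=True, random_state=seed)
        for tr, va in kf.split(features):
            accs.append(train_and_eval(features[tr], labels[tr],
                                       features[va], labels[va]))
        accs = 100 * np.asarray(accs)
        return f"{accs.mean():.1f} ± {accs.std():.1f} %"  # mean ± std, as reported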