Heterogeneous Transfer Learning via Deep Matrix Completion with Adversarial Kernel Embedding

Authors: Haoliang Li, Sinno Jialin Pan, Renjie Wan, Alex C. Kot (pp. 8602-8609)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on two different vision tasks to demonstrate the effectiveness of our proposed method compared with a number of baseline methods.
Researcher Affiliation | Academia | Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University, Singapore; School of Computer Science and Engineering, Nanyang Technological University, Singapore
Pseudocode | Yes | Algorithm 1 Deep-MCA
Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code.
Open Datasets | Yes | We follow the setting (Tsai, Yeh, and Frank Wang 2016; Yan et al. 2018) by using images collected from the Amazon (A), DSLR (D), webcam (W) and Caltech-256 (C) datasets, where ten common categories shared by all these datasets are used to conduct experiments. We apply NUS-WIDE (Chua et al. 2009) and ImageNet (Deng et al. 2009) as the datasets for the text-to-image classification task.
Dataset Splits | No | The paper mentions selecting training data and using the remaining instances for testing, but does not explicitly specify a validation set or concrete train/validation/test percentages or counts.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions 'Caffe', the 'ADAM' optimizer, and 'GAN' training, but does not specify any software dependencies with version numbers.
Experiment Setup | Yes | The learning rate of our algorithm is set as 0.0001 for all experiments. Regarding the parameter setting for the objective, one can use a tuning strategy by training on the source domain and testing on the labeled target domain. In our experiments, we fix the parameters for all experiments for simplicity. In particular, we set λ = 0.001, ζ = 10 and all others as 1. We set the dimension of the hidden layer as 100 for fair comparison with other baseline methods.
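The fixed settings quoted in the Experiment Setup row can be collected into a small configuration sketch. This is a minimal illustration only: the key names below are assumed for readability, since the paper releases no code and does not name these variables.

```python
# Hypothetical config dict for the hyperparameters reported in the paper.
# Key names are illustrative assumptions, not the authors' identifiers.
config = {
    "learning_rate": 1e-4,  # fixed at 0.0001 for all experiments
    "lambda": 1e-3,         # objective weight lambda = 0.001
    "zeta": 10.0,           # objective weight zeta = 10
    "other_weights": 1.0,   # all remaining objective weights set to 1
    "hidden_dim": 100,      # hidden-layer dimension, fixed for fair comparison
}

# Print the settings in a reproducibility-log style.
for name, value in config.items():
    print(f"{name} = {value}")
```

Fixing all weights across experiments, as the paper does, trades per-task tuning for simplicity and makes the reported comparisons easier to reproduce.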