Deep Low-Rank Coding for Transfer Learning

Authors: Zhengming Ding, Ming Shao, Yun Fu

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on several benchmarks have demonstrated the effectiveness of our proposed algorithm on facilitating the recognition performance for the target domain. In this section, we evaluate our proposed method on several benchmarks. We will first introduce the datasets and experimental setting. Then comparison results will be presented, followed by some properties analysis and discussion.
Researcher Affiliation | Academia | Zhengming Ding (1), Ming Shao (1) and Yun Fu (1,2); (1) Department of Electrical & Computer Engineering, (2) College of Computer & Information Science, Northeastern University, Boston, MA, USA; {allanding,mingshao,yunfu}@ece.neu.edu
Pseudocode | Yes | Algorithm 1: Solving Problem (3) by ALM; Algorithm 2: Algorithm of Deep Low-Rank Coding (DLRC). (A generic inexact-ALM sketch is given after this table.)
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository.
Open Datasets | Yes | MSRC+VOC includes two datasets: (1) the MSRC dataset [1], provided by Microsoft Research Cambridge, contains 4,323 images labeled with 18 classes; (2) the VOC2007 dataset [2] contains 5,011 images annotated with 20 concepts. USPS+MNIST [3] includes 10 common classes of digits from two datasets: (1) the USPS dataset consists of 7,291 training images and 2,007 test images; (2) the MNIST dataset has a training set of 60,000 examples and a test set of 10,000 examples. Reuters-21578 [4] is a difficult text dataset with many top categories and subcategories. Office+Caltech-256 [5] selects 10 common categories from the Office dataset and Caltech-256. (Footnotes 1-5 provide URLs for these datasets.)
Dataset Splits | No | The paper mentions that 'the USPS dataset consists of 7,291 training images and 2,007 test images' and 'the MNIST dataset has a training set of 60,000 examples and a test set of 10,000 examples,' providing train and test set sizes. However, it does not explicitly describe a validation split or other dataset-splitting details needed to reproduce the experimental setup.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, or solvers with their respective versions) used for implementing or running the experiments.
Experiment Setup | Yes | Initialize: W0 = Z0 = J0 = Y1,0 = Y2,0 = 0, µ0 = 10^-6, µmax = 10^6, ρ = 1.1, ε = 10^-6, t = 0. In the experiments, we employ five-layer features and combine them together to evaluate the final performance of our DLRC. s(W^k x_i, W^k x_j) = exp(-||W^k x_i - W^k x_j||^2 / (2σ^2)) is a Gaussian kernel function with bandwidth σ (we set σ = 1 in our experiment). In the experiments, we usually choose α = 10 and λ = 1. In the experiments, we first employ the nearest neighbour classifier to predict the labels of target data using source data. Then, we label the 50% of target samples that are closest to the labeled source data according to Euclidean distance. (See the second sketch below the table.)
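The Pseudocode row refers to Algorithm 1, which solves the paper's Problem (3) by the inexact Augmented Lagrange Multiplier (ALM) method. The exact updates depend on Problem (3) and the projection W, which are not reproduced here. Purely as a hedged illustration of the kind of ALM loop implied by the initialization quoted in the Experiment Setup row (auxiliary variables Z and J, multipliers Y1 and Y2, schedule µ0, µmax, ρ, ε), the sketch below solves the standard LRR-style problem min ||J||_* + λ||E||_1 s.t. X = XZ + E, Z = J; it is not the authors' Algorithm 1.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entry-wise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lrr_inexact_alm(X, lam=1.0, mu=1e-6, mu_max=1e6, rho=1.1, eps=1e-6, max_iter=500):
    """Generic inexact-ALM solver for
         min ||J||_* + lam * ||E||_1   s.t.   X = X Z + E,  Z = J
    (illustrative only; not the paper's Problem (3), which also involves W)."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(max_iter):
        # J-step: singular value thresholding with threshold 1/mu
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z-step: closed-form least-squares update
        Z = np.linalg.solve(np.eye(n) + XtX,
                            X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        # E-step: soft thresholding of the reconstruction residual
        E = soft(X - X @ Z + Y1 / mu, lam / mu)
        # multiplier and penalty updates
        R1 = X - X @ Z - E
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        # stop when both constraints are satisfied to tolerance eps
        if max(np.abs(R1).max(), np.abs(R2).max()) < eps:
            break
    return Z, E
```

The µ schedule and stopping rule mirror the values quoted in the Experiment Setup row (µ0 = 10^-6, µmax = 10^6, ρ = 1.1, ε = 10^-6).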
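The Experiment Setup row also quotes the Gaussian-kernel similarity and the pseudo-labelling protocol: a nearest-neighbour classifier predicts target labels from the labelled source data, and the 50% of target samples closest to the source set keep their predicted labels. A minimal sketch, assuming plain NumPy feature matrices and an optional placeholder projection W (the authors' layer-wise features and code are not reproduced here):

```python
import numpy as np

def gaussian_similarity(xi, xj, W=None, sigma=1.0):
    """s(W x_i, W x_j) = exp(-||W x_i - W x_j||^2 / (2 sigma^2)); W=None means identity."""
    if W is not None:
        xi, xj = W @ xi, W @ xj
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

def pseudo_label_target(Xs, ys, Xt, keep_ratio=0.5):
    """1-NN prediction of target labels from source data, keeping only the
    keep_ratio fraction of target samples nearest to the source set.
    Xs: (ns, d) source features, ys: (ns,) source labels, Xt: (nt, d) target features."""
    ys = np.asarray(ys)
    # pairwise Euclidean distances between target and source samples, shape (nt, ns)
    dists = np.linalg.norm(Xt[:, None, :] - Xs[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)          # index of the closest source sample
    yt_pred = ys[nearest]                   # 1-NN labels for all target samples
    nearest_dist = dists.min(axis=1)
    # keep the target samples closest to the labelled source data
    n_keep = int(keep_ratio * len(Xt))
    keep_idx = np.argsort(nearest_dist)[:n_keep]
    return keep_idx, yt_pred[keep_idx]
```

gaussian_similarity reproduces s(W^k x_i, W^k x_j) with σ = 1 by default; pseudo_label_target returns the indices and predicted labels of the retained 50% of target samples.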