Multi-View Correlated Feature Learning by Uncovering Shared Component

Authors: Xiaowei Xue, Feiping Nie, Sen Wang, Xiaojun Chang, Bela Stantic, Min Yao

AAAI 2017

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. Evidence: "Extensive experiments are conducted on several benchmark datasets. The results demonstrate that our proposed algorithm performs better than all the compared multi-view learning algorithms." and "In this section, systematical experiments have been conducted to evaluate the performance of the proposed MVCS."
Researcher Affiliation: Academia. (1) College of Computer Science, Zhejiang University, P.R. China; (2) School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P.R. China; (3) School of Information and Communication Technology, Griffith University, Australia; (4) Centre for Quantum Computation and Intelligent Systems (QCIS), University of Technology Sydney, Australia.
Pseudocode: Yes. The paper provides "Algorithm 1: Multi-view Correlated feature Learning".
Open Source Code: No. The paper mentions "implement the compared MKL methods using the codes published by (Yu et al. 2010; Kloft et al. 2011)" and states that the LIBSVM software package "is used to implement SVM in all our experiments." These refer to code for the *compared* methods, not the authors' own MVCS algorithm; there is no explicit statement about releasing the source code for MVCS or a link to its repository.
Open Datasets: Yes. Four benchmarks are used:
- NUS-WIDE-OBJECT (Chua et al. 2009): used to compare the multi-view algorithms on object categorization.
- OUTDOOR SCENE (Monadjemi, Thomas, and Mirmehdi 2002): 2,688 color images belonging to 8 outdoor scene categories.
- MSRC-V1: a scene recognition dataset of 240 images in 8 classes; following the setting in (Grauman and Darrell 2006), 7 classes are selected with 30 images per class.
- Handwritten Digit: ten digit classes (0 to 9) and 2,000 data points in total.
Dataset Splits: Yes. "In all the experiments, we apply standard 5-fold cross-validation and report the average results with standard deviation."
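The quoted evaluation protocol can be sketched in plain NumPy. This is an illustrative sketch, not the authors' code: the `train_and_eval` callback, the shuffling seed, and the function names are assumptions, and the paper does not specify its fold-assignment details.

```python
import numpy as np

def five_fold_cv(X, y, train_and_eval, seed=0):
    """Standard 5-fold cross-validation, reporting the average result
    with standard deviation, as described in the paper.

    train_and_eval(X_tr, y_tr, X_te, y_te) -> accuracy (placeholder for
    any classifier; the paper uses SVM/MKL baselines and MVCS here)."""
    rng = np.random.default_rng(seed)          # assumed shuffling; not stated in the paper
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    scores = []
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        scores.append(train_and_eval(X[train_idx], y[train_idx],
                                     X[test_idx], y[test_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```

Each sample serves as test data exactly once, and the mean/std pair matches the "average results with standard deviation" reporting format used in the paper's tables.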
Hardware Specification: No. No specific hardware details (e.g., CPU or GPU models, memory, or cloud instance types) are provided for the experimental setup.
Software Dependencies: No. The paper cites code for the comparison methods ("implement the compared MKL methods using the codes published by (Yu et al. 2010; Kloft et al. 2011)") and states that the LIBSVM software package is used to implement SVM in all experiments, but it gives no version numbers for these dependencies (e.g., "LIBSVM version X.Y" or "Python 3.Z").
Experiment Setup: Yes. The parameter of the method (α in Eq. (2)) is optimized over the range {10^-6, 10^-4, ..., 10^4, 10^6}. For the SVM and MKL methods, one Gaussian kernel is constructed for each type of features, K(x_i, x_j) = exp(-γ ||x_i - x_j||^2), where γ is fine-tuned over the same range as α. The sum of the dimensionalities of Z and Z_i, namely d + d_s, is set equal to the number of classes c, and d_s is optimized over the range {0, 1, 2, 3}.
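The search space above can be written out concretely. A minimal sketch, assuming the standard squared-Euclidean Gaussian (RBF) kernel; the grid constants come from the paper, while the function and variable names are illustrative (the authors released no code):

```python
import numpy as np

# Grids from the paper: alpha and gamma in {10^-6, 10^-4, ..., 10^4, 10^6},
# d_s in {0, 1, 2, 3} (with d + d_s fixed to the number of classes c).
PARAM_GRID = [10.0 ** p for p in range(-6, 7, 2)]
DS_GRID = [0, 1, 2, 3]

def gaussian_kernel(X, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2), built once
    per feature type for the SVM/MKL baselines."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))   # clamp tiny negatives from round-off
```

Note that the exponents step by 2, so the grid has 7 values per parameter; each (α or γ, d_s) combination would then be scored under the 5-fold protocol described above.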