Locality-Constrained Low-Rank Coding for Image Classification

Authors: Ziheng Jiang, Ping Guo, Lihong Peng

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiments, we evaluate LCLR with four benchmarks, including one face recognition dataset (Extended Yale B), one handwritten digit recognition dataset (USPS), and two image datasets (Scene-13 for scene recognition and Caltech101 for object recognition). Experimental results show that our approach outperforms many state-of-the-art algorithms even with a linear classifier.
Researcher Affiliation | Academia | School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China; School of Computer Science, National University of Defense Technology, Changsha 410073, China
Pseudocode | Yes | Algorithm 1: online coding by LCLR; Algorithm 2: dictionary learning by LCLR (a generic low-rank coding building block is sketched after the table).
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, nor does it explicitly state that the code is open source or available.
Open Datasets | Yes | The Extended Yale B dataset, containing 2,414 frontal face images of 38 people... (Georghiades et al. 2001); the widely used USPS dataset... (http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html); the Caltech101 dataset (Fei-Fei et al. 2007)...; Scene-13 (Fei-Fei and Perona 2005) contains 3,859 images of 13 classes.
Dataset Splits | No | The paper provides train/test splits for all datasets (e.g., 'randomly choose 100 images from each class for training and the rest for testing' for Scene-13, and '7,291 for training and 2,007 for testing' for USPS), but it does not explicitly mention a separate validation split or cross-validation for hyperparameter tuning (see the per-class split sketch after the table).
Hardware Specification | Yes | All experiments are carried out using MATLAB on an Intel Core i7-4770K PC with 16 GB RAM.
Software Dependencies | No | The paper names MATLAB as the software used but gives neither its version nor any other software dependencies with version numbers.
Experiment Setup | Yes | For Extended Yale B, the parameter settings are λ1 = 1, λ2 = 500, σ = 200, µ = 10^2, k = 512; for USPS, the settings are the same except that k = 1024 and ρ = 1.1. For object and scene recognition, max-pooling is adopted, an SPM kernel with 3 levels of 1×1, 2×2, and 4×4 is utilized, and Linear-SVM (Fan et al. 2008) is chosen for classification (see the SPM max-pooling sketch after the table).
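
The two algorithms named in the Pseudocode row are not reproduced here. As a hedged illustration only, the sketch below shows two building blocks that locality-constrained, low-rank coding solvers of this kind commonly rely on: singular value thresholding (the proximal operator of the nuclear norm) and an LLC-style locality adaptor that penalizes distant dictionary atoms via an exponential of the descriptor-to-atom distance. Whether LCLR's Algorithm 1 uses exactly these steps is an assumption; the function names and toy dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def singular_value_threshold(Z, tau):
    """Shrink the singular values of Z by tau (proximal operator of the
    nuclear norm). A standard step in ALM/ADMM solvers for low-rank coding;
    shown as a generic building block, not the paper's exact Algorithm 1."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def locality_weights(D, x, sigma):
    """LLC-style locality adaptor: exp(distance / sigma) per dictionary atom,
    so far-away atoms receive a larger penalty. D: (d, k) dictionary,
    x: (d,) descriptor. The role of sigma here is an assumption."""
    dists = np.linalg.norm(D - x[:, None], axis=0)
    return np.exp(dists / sigma)

# Toy usage with illustrative sizes (k = 512 matches the reported Yale B codebook).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 512))       # d x k dictionary
x = rng.standard_normal(64)              # one descriptor
w = locality_weights(D, x, sigma=200.0)  # per-atom locality penalties
Z = rng.standard_normal((512, 100))      # codes for a batch of 100 descriptors
Z_low_rank = singular_value_threshold(Z, tau=0.5)
```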
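
The split protocol quoted in the Dataset Splits row ("randomly choose 100 images from each class for training and the rest for testing") amounts to a per-class random split. A minimal sketch, assuming a flat label vector; the helper name is illustrative and not the authors' code:

```python
import numpy as np

def per_class_split(labels, n_train_per_class, seed=0):
    """Pick n_train_per_class random indices from each class for training;
    everything else goes to the test set. Illustrative helper, not the
    authors' code."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    return np.array(train_idx), np.array(test_idx)

# Scene-13-style protocol: 100 training images per class, the rest for testing.
labels = np.repeat(np.arange(13), 300)  # toy label vector with 13 classes
train_idx, test_idx = per_class_split(labels, n_train_per_class=100)
```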
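
The pooling pipeline quoted in the Experiment Setup row (max-pooling over a 3-level spatial pyramid of 1×1, 2×2, and 4×4 cells, followed by a linear SVM) can be sketched as follows. The code assumes each local descriptor comes with an (x, y) position and a code vector; it is a generic SPM max-pooling illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def spm_max_pool(codes, xy, image_size, levels=(1, 2, 4)):
    """Max-pool local codes over a spatial pyramid (1x1, 2x2, 4x4 by default).
    codes: (n, k) codes of n local descriptors, xy: (n, 2) descriptor positions,
    image_size: (width, height). Returns a feature of length k * sum(l*l)."""
    n, k = codes.shape
    w, h = image_size
    pooled = []
    for l in levels:
        # Cell index of each descriptor at this pyramid level.
        cx = np.minimum((xy[:, 0] * l / w).astype(int), l - 1)
        cy = np.minimum((xy[:, 1] * l / h).astype(int), l - 1)
        cell = cy * l + cx
        for c in range(l * l):
            mask = cell == c
            pooled.append(codes[mask].max(axis=0) if mask.any() else np.zeros(k))
    return np.concatenate(pooled)

# Toy usage: 500 local descriptors with a 1024-entry codebook on a 300x200 image.
rng = np.random.default_rng(0)
codes = np.abs(rng.standard_normal((500, 1024)))
xy = rng.uniform(0, 1, size=(500, 2)) * [300, 200]
feature = spm_max_pool(codes, xy, image_size=(300, 200))
```

The concatenated pyramid feature (length k · (1 + 4 + 16)) would then be fed to the linear SVM mentioned in the setup.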