Discriminant Projection Representation-Based Classification for Vision Recognition

Authors: Qingxiang Feng, Yicong Zhou

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on five typical databases show that the proposed PRC and DPRC are effective and outperform other state-of-the-art methods on several vision recognition tasks.
Researcher Affiliation | Academia | Qingxiang Feng, Yicong Zhou* (Computer and Information Science, University of Macau)
Pseudocode | Yes | Algorithm 1: Projection Representation
Open Source Code | No | No explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | The LFW-a database (Zhu et al. 2012) is used in this experiment. The well-known 15-scene database contains 4,485 images of 15 scene categories (Lazebnik, Schmid, and Ponce 2006). The Caltech101 dataset (Fei-Fei, Fergus, and Perona 2007) has 9,144 images with 102 classes. The Ucf50 action dataset (Reddy and Shah 2013) has 6,680 action videos with 50 action categories. The Caltech-256 dataset (Griffin, Holub, and Perona 2007) has 30,608 object images of 256 object classes.
Dataset Splits | Yes | The experimental setup: 5 samples are randomly selected to form the training set, while the other 2 samples are used for testing. The following experimental protocol is used (Liu and Liu 2015): 100 images per class are randomly chosen for training and the remaining images are used for testing. Following the common experimental settings, we train on 5 samples per class and use the remaining images as the testing set. For fair comparison, we follow ref. (Guo et al. 2016): divide the database into five folds, use four folds for training and one fold for testing. We randomly select 60 images for training; the remaining images are used for testing.
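
As a concrete illustration of the per-class split protocols quoted above (e.g., 5 training samples and 2 testing samples per class for LFW-a), the following Python sketch performs such a random per-class split; the function name, seed handling, and NumPy usage are assumptions for illustration, not part of the paper.

import numpy as np

def per_class_split(labels, n_train, seed=0):
    # Randomly pick n_train samples per class for training; the rest are used for testing.
    # `labels` is a 1-D array of class labels; returns (train_idx, test_idx) index arrays.
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)

# Example (LFW-a protocol quoted above): 5 training samples per class, remaining 2 for testing.
# train_idx, test_idx = per_class_split(labels, n_train=5)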
Hardware Specification | No | No specific hardware specifications (e.g., GPU/CPU models or memory) are mentioned for running the experiments.
Software Dependencies | No | No specific software dependencies with version numbers are provided.
Experiment Setup | Yes | The experimental setup: 5 samples are randomly selected to form the training set, while the other 2 samples are used for testing. The following experimental protocol is used (Liu and Liu 2015): 100 images per class are randomly chosen for training and the remaining images are used for testing. We utilize the 3000-dimensional spatial pyramid feature provided by (Jiang, Lin, and Davis 2013) to represent the object images. We use PCA (Luo et al. 2016) to reduce the action bank features (Sadanand and Corso 2012) to 5000 dimensions. Set a stop parameter e = 1; if one of the two stop conditions is satisfied, e = 0 and the iteration stops. The projection representation p^c can then be described as p^c = p_i^{c,k} = x_i^{c,k} + t_k (x_j^{c,k} - x_i^{c,k}) (Eq. 11). Set e = 1; J = 100; δ0 = 0.01; Repeat ...
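
Eq. (11) quoted above has the form of a query point projected onto the line through two training samples of a class. Below is a minimal Python sketch of that projection-representation step, assuming the coefficient t_k is obtained by orthogonal (least-squares) projection of the query onto that line; the function names and the nearest-projection decision rule are illustrative assumptions, not the authors' exact PRC/DPRC formulation.

import numpy as np

def projection_representation(y, x_i, x_j):
    # Projection point p = x_i + t * (x_j - x_i), matching the form of Eq. (11).
    # Here t is the orthogonal projection coefficient of y onto the line through
    # x_i and x_j (an assumption about how t_k is computed).
    d = x_j - x_i
    t = float(np.dot(y - x_i, d) / (np.dot(d, d) + 1e-12))
    return x_i + t * d

def classify_by_nearest_projection(y, train_X, train_y):
    # Toy decision rule: assign y to the class whose projection point lies closest to y.
    # The actual PRC/DPRC pair-selection, iteration, and discriminant steps may differ.
    best_class, best_dist = None, np.inf
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        for a in range(len(Xc)):
            for b in range(a + 1, len(Xc)):
                p = projection_representation(y, Xc[a], Xc[b])
                dist = np.linalg.norm(y - p)
                if dist < best_dist:
                    best_class, best_dist = c, dist
    return best_class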