A Self-Representation Induced Classifier
Authors: Pengfei Zhu, Lei Zhang, Wangmeng Zuo, Xiangchu Feng, Qinghua Hu
IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on different pattern recognition tasks show that DSRIC achieves comparable or superior recognition rates to state-of-the-art representation based classifiers, while being much more efficient and needing much less storage space. ... In this section, we test the performance of DSRIC on eight UCI datasets, two handwritten digit recognition databases, two face recognition databases and one gender classification dataset. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Tianjin University, Tianjin, China; (2) Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China; (3) School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China; (4) School of Mathematics and Statistics, Xidian University, Xi'an, China |
| Pseudocode | Yes | Algorithm 1: The algorithm of the discriminative self-representation induced classifier (DSRIC). (An illustrative sketch of such a classifier follows the table.) |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | In this section, we test the performance of DSRIC on eight UCI datasets, two handwritten digit recognition databases, two face recognition databases and one gender classification dataset. ... USPS: The USPS dataset contains 7,291 training and 2,007 testing images. ... MNIST: The MNIST dataset includes a training set of 60,000 samples and a test set of 10,000 samples. ... LFW database: The LFW database contains images of 5,749 subjects in unconstrained environments. ... The AR dataset is used. |
| Dataset Splits | Yes | There are two parameters in DSRIC: λ1 and λ2. In all the experiments, λ2 is fixed as 0.001 and λ1 is chosen on the training dataset by five-fold cross-validation. |
| Hardware Specification | Yes | All algorithms are run on an Intel(R) Core(TM) i7-2600K (3.4GHz) PC. |
| Software Dependencies | No | The paper mentions general tools but does not specify any software dependencies with version numbers (e.g., specific Python library versions or solver versions). |
| Experiment Setup | Yes | There are two parameters in DSRIC: λ1 and λ2. In all the experiments, λ2 is fixed as 0.001 and λ1 is chosen on the training dataset by five-fold cross-validation. For the compared representation based methods, the parameters in NCH and NAH are set as 1 and 100, respectively, as suggested in the original paper; the regularization parameter in NSC, SRC and CRC is tuned from {0.0005, 0.001, 0.005, 0.01} and the best results are reported; following the experiment setting in [Chi and Porikli, 2014], the parameter of CROC is chosen by five-fold cross-validation on the training set. (See the cross-validation sketch after the table.) |
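Since the paper provides pseudocode (Algorithm 1) but no released code, the sketch below is a rough, assumption-based illustration of a class-wise self-representation residual classifier of the kind the name DSRIC suggests: each class learns a matrix that reconstructs its own samples while suppressing reconstruction of other classes, and a test sample is assigned to the class with the smallest reconstruction residual. The objective, its closed-form solution, and the names `fit_dsric_like` and `predict` are assumptions made for illustration, not the authors' exact Algorithm 1.

```python
import numpy as np

def fit_dsric_like(X, y, lam1=0.01, lam2=0.001):
    """Learn one self-representation matrix per class (illustrative closed form).

    X : (d, n) array, columns are training samples.
    y : (n,) integer class labels.
    Returns {class_label: (d, d) matrix P_c}.
    """
    d = X.shape[0]
    projections = {}
    for c in np.unique(y):
        Xc = X[:, y == c]                 # samples of class c
        Xo = X[:, y != c]                 # samples of all other classes
        A = Xc @ Xc.T
        B = Xo @ Xo.T
        # Assumed ridge-like objective:
        #   min_P ||Xc - P Xc||_F^2 + lam1 ||P Xo||_F^2 + lam2 ||P||_F^2,
        # whose closed-form minimiser is A (A + lam1 B + lam2 I)^{-1}.
        projections[c] = A @ np.linalg.inv(A + lam1 * B + lam2 * np.eye(d))
    return projections

def predict(projections, Xtest):
    """Assign each test column to the class with the smallest residual ||x - P_c x||_2."""
    classes = list(projections)
    residuals = np.stack(
        [np.linalg.norm(Xtest - projections[c] @ Xtest, axis=0) for c in classes]
    )
    return np.asarray(classes)[np.argmin(residuals, axis=0)]
```

Deciding by the smallest per-class reconstruction residual is the usual rule for representation-based classifiers; only the class-wise matrices need to be stored at test time, which is consistent with the storage claim quoted above.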
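The quoted setup fixes λ2 = 0.001 and selects λ1 by five-fold cross-validation on the training set. The sketch below shows one way to carry out that selection; it reuses `fit_dsric_like` and `predict` from the sketch above, and the candidate grid is borrowed from the grid quoted for the compared methods, since the excerpt does not state the grid actually used for λ1, so both are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_lambda1(X, y, grid=(0.0005, 0.001, 0.005, 0.01), lam2=0.001, seed=0):
    """Choose lambda_1 by five-fold cross-validation on the training data,
    with lambda_2 held fixed at 0.001 as quoted in the setup.

    Assumes fit_dsric_like and predict from the previous sketch are in scope.
    """
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    best_lam1, best_acc = None, -1.0
    for lam1 in grid:
        fold_acc = []
        for tr, va in kf.split(X.T):      # split over samples (columns of X)
            P = fit_dsric_like(X[:, tr], y[tr], lam1=lam1, lam2=lam2)
            fold_acc.append(np.mean(predict(P, X[:, va]) == y[va]))
        if np.mean(fold_acc) > best_acc:
            best_lam1, best_acc = lam1, float(np.mean(fold_acc))
    return best_lam1, best_acc
```

Usage would be along the lines of `best_lam1, _ = select_lambda1(X_train, y_train)`, after which the classifier is refit on the full training set with the selected λ1.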