Scalable Block-Diagonal Locality-Constrained Projective Dictionary Learning

Authors: Zhao Zhang, Weiming Jiang, Zheng Zhang, Sheng Li, Guangcan Liu, Jie Qin

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experimental Results and Analysis
Researcher Affiliation | Academia | Zhao Zhang (1), Weiming Jiang (2), Zheng Zhang (3), Sheng Li (4), Guangcan Liu (5) and Jie Qin (6); (1) School of Computer Science & School of Artificial Intelligence, Hefei University of Technology, China; (2) School of Computer Science and Technology, Soochow University, China; (3) School of Information Technology and Electrical Engineering, University of Queensland, Australia; (4) Department of Computer Science, University of Georgia, USA; (5) School of Information and Control, Nanjing University of Information Science and Technology, China; (6) Computer Vision Laboratory, ETH Zurich, Switzerland
Pseudocode | Yes | Algorithm 1 Scalable Locality-Constrained Projective DL
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | MIT CBCL Face Database [Weyrauch et al., 2004], AR Face Database [Martinez and Benavente, 1998], Caltech101 Database [Perona et al., 2004], Caltech256 Database [Griffin et al., 2007], Yale Face Database (at http://vision.csd.edu/content/yale-face-database)
Dataset Splits | No | The paper describes training/testing splits (e.g., 'randomly select 4 images per person for training, while test on the rest.') but does not explicitly mention a distinct validation set or how one would be drawn; a sketch of such a per-class split follows the table.
Hardware Specification | Yes | We perform all the simulations on a PC with an Intel(R) Core(TM) i3-4130 CPU @ 3.4 GHz and 8 GB of RAM.
Software Dependencies | No | The paper mentions using PCA and LDA for feature reduction but does not specify any software libraries or version numbers used for the implementation; see the feature-reduction sketch after the table.
Experiment Setup | Yes | Parameters τ = 0.01, α = 0.01 and β = 0.1 are used for LC-PDL in one experiment, and τ = 0.01, α = 0.1 and β = 0.1 in another. LC-PDL works well over a wide range of α and β, meaning the model is insensitive to these two parameters and delivers stable performance. It is also noted that a τ larger than 10^-2 tends to decrease the recognition result, i.e., a small τ should be used. A configuration sketch follows the table.
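
The Dataset Splits row quotes a protocol of randomly selecting 4 images per person for training and testing on the rest. Below is a minimal sketch of such a per-class split, assuming NumPy; the function name `per_class_split` and the toy label array are illustrative, not taken from the paper.

```python
import numpy as np

def per_class_split(y, n_train_per_class=4, seed=0):
    """Pick n_train_per_class random samples of each class for training;
    the remaining samples of that class form the test set."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    return np.array(train_idx), np.array(test_idx)

# Toy example: 10 subjects with 10 images each (labels only; features omitted).
y = np.repeat(np.arange(10), 10)
train_idx, test_idx = per_class_split(y, n_train_per_class=4)
print(len(train_idx), len(test_idx))   # 40 60
```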
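The Software Dependencies row notes that PCA and LDA are used for feature reduction without naming any library. The sketch below shows one plausible way to chain the two using scikit-learn; the library choice, matrix shapes, and component counts are assumptions, not details confirmed by the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data standing in for vectorized face images (rows = samples).
X_train = np.random.rand(40, 1024)      # 10 subjects x 4 training images
y_train = np.repeat(np.arange(10), 4)
X_test = np.random.rand(60, 1024)

# PCA first to discard the near-null space, then LDA for a discriminative
# reduction to at most (number of classes - 1) dimensions.
pca = PCA(n_components=30).fit(X_train)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)

X_train_red = lda.transform(pca.transform(X_train))
X_test_red = lda.transform(pca.transform(X_test))
print(X_train_red.shape, X_test_red.shape)   # (40, 9) (60, 9)
```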
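The Experiment Setup row reports τ = 0.01 with (α, β) set to (0.01, 0.1) or (0.1, 0.1), along with insensitivity to α and β and a preference for small τ. The sketch below only organizes those reported values and a hypothetical (α, β) grid for a would-be sensitivity check; since no code is released, no LC-PDL trainer is included.

```python
import itertools

# Parameter settings reported in the Experiment Setup row (two settings).
LC_PDL_SETTINGS = [
    {"tau": 0.01, "alpha": 0.01, "beta": 0.1},
    {"tau": 0.01, "alpha": 0.1,  "beta": 0.1},
]

# Hypothetical (alpha, beta) grid with tau kept small, mirroring the reported
# stability of LC-PDL across a wide range of both parameters.
grid = [
    {"tau": 0.01, "alpha": a, "beta": b}
    for a, b in itertools.product([1e-3, 1e-2, 1e-1, 1.0], repeat=2)
]
# Each configuration in `grid` would be passed to an LC-PDL trainer
# (not reproduced here) to verify the claimed insensitivity to alpha and beta.
print(len(grid))   # 16 candidate settings
```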