Discriminative Analysis Dictionary Learning

Authors: Jun Guo, Yanqing Guo, Xiangwei Kong, Man Zhang, Ran He

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on several commonly used databases show that our proposed method not only significantly improves the discriminative ability of ADL, but also outperforms state-of-the-art synthesis DL methods.
Researcher Affiliation | Academia | (1) School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China; (2) The Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; (3) CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100190, China
Pseudocode | Yes | Algorithm 1: Discriminative Analysis Dictionary Learning (see the analysis-coding sketch after this table)
Open Source Code | No | The paper does not provide any links or explicit statements about releasing source code for the proposed DADL method.
Open Datasets | Yes | We use the features of these databases provided by Jiang (http://www.umiacs.umd.edu/~zhuolin/projectlcksvd.html) and Corso (http://www.cse.buffalo.edu/~jcorso/r/actionbank).
Dataset Splits | No | Cross-validation is mentioned for tuning parameters, but a distinct validation split is not explicitly described; only training and testing splits are detailed for the final evaluation.
Hardware Specification | Yes | Our experiments are run via MATLAB R2013a on a desktop PC with an Intel Core i7-3770 processor at 3.40 GHz and 16.00 GB RAM.
Software Dependencies | Yes | Our experiments are run via MATLAB R2013a on a desktop PC with an Intel Core i7-3770 processor at 3.40 GHz and 16.00 GB RAM.
Experiment Setup | Yes | We set the Gaussian kernel parameter σ = 10 and the balance weight λ1 = 10 in all our experiments. The experimental results are insensitive to σ ∈ [7, 13] and λ1 ∈ [10, 15]. The other major parameters (k, λ2, λ3) on each database have been tuned by cross validation. The best (k, λ2, λ3) for each database are listed in Table 1. (A parameter-search sketch follows the table.)
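
The paper's Algorithm 1 is not reproduced in this report. As a rough, hedged illustration of the analysis-dictionary setting it works in, the sketch below shows why analysis coding is cheap at test time: a code is obtained by a single projection with the dictionary, instead of solving a per-sample sparse-coding problem as synthesis DL methods do. The random `Omega` and the linear scoring matrix `W` are placeholders of our own, not the trained quantities or the decision rule from the paper.

```python
# Minimal sketch of analysis-dictionary coding (not the paper's Algorithm 1).
# Omega and W are random placeholders standing in for whatever a training
# stage such as DADL would actually produce.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_classes = 64, 128, 10            # feature dim, dictionary atoms, classes

Omega = rng.standard_normal((k, d))      # stand-in for a learned analysis dictionary
W = rng.standard_normal((n_classes, k))  # stand-in for a learned linear classifier

def classify(x):
    """Analysis coding is one matrix product; no per-sample optimization needed."""
    z = Omega @ x                        # code of the sample under Omega
    return int(np.argmax(W @ z))         # pick the highest-scoring class

x_test = rng.standard_normal(d)
print("predicted class:", classify(x_test))
```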
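
The Experiment Setup row fixes σ = 10 and λ1 = 10 and states that (k, λ2, λ3) were tuned by cross validation, with the chosen values reported in the paper's Table 1. Below is a minimal sketch of how such a grid search could be organized; the candidate grids, the `train_and_evaluate` stand-in, and the kernel normalization are our assumptions, not details taken from the paper.

```python
# Hedged sketch of cross-validation tuning for (k, lambda2, lambda3); sigma and
# lambda1 stay fixed as in the quoted setup. All grids and the scoring stand-in
# are illustrative only.
from itertools import product
import numpy as np

SIGMA, LAMBDA1 = 10.0, 10.0              # fixed in all experiments per the paper

def gaussian_kernel(x, y, sigma=SIGMA):
    # Common RBF form exp(-||x - y||^2 / (2 * sigma^2)); the paper's exact
    # normalization may differ.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def train_and_evaluate(k, lam2, lam3, sigma=SIGMA, lam1=LAMBDA1):
    # Hypothetical placeholder: in practice this would train the model on the
    # training folds and return mean accuracy on held-out folds. The dummy
    # score below only keeps the sketch runnable.
    return 1.0 / (1.0 + abs(k - 400) + abs(lam2 - 1e-2) + abs(lam3 - 1e-2))

k_grid = [200, 400, 600]                 # illustrative dictionary sizes
lam2_grid = [1e-3, 1e-2, 1e-1]
lam3_grid = [1e-3, 1e-2, 1e-1]

best = max(
    ((train_and_evaluate(k, l2, l3), k, l2, l3)
     for k, l2, l3 in product(k_grid, lam2_grid, lam3_grid)),
    key=lambda t: t[0],
)
print("best (k, lambda2, lambda3):", best[1:], "score:", best[0])
```

The per-database winners of such a search are what the paper's Table 1 reports.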