Robust Non-Negative Dictionary Learning

Authors: Qihe Pan, Deguang Kong, Chris Ding, Bin Luo

AAAI 2014

Reproducibility Variable Result LLM Response
Research Type | Experimental | "In this section, we empirically evaluate the proposed approach; our goal is to examine the convergence of the proposed algorithm and to compare against other robust dictionary learning methods in noisy environments. We experiment on five data sets: the face datasets AT&T, Umist, and Yale B, the digit dataset MNIST (LeCun et al. 1998), and the image-scene dataset Caltech101 (Dueck and Frey 2007)."
Researcher Affiliation | Academia | ¹Beihang University, China; ²University of Texas, Arlington, U.S.A.; ³Anhui University, China
Pseudocode | No | The paper describes the algorithm's updating rules in equations (6) and (7) but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper provides no links, explicit statements, or references to supplementary materials for open-source code related to the described methodology.
Open Datasets | Yes | All five datasets (AT&T, Umist, Yale B, MNIST, Caltech101) are publicly available; the AT&T face database is hosted at http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.
Dataset Splits | No | The paper uses 10% and 20% labeled data in its semi-supervised learning experiments but does not specify train, validation, or test splits for the main dictionary learning evaluation.
Hardware Specification | No | The paper reports no hardware details (e.g., GPU model, CPU type, or memory) used for running the experiments.
Software Dependencies | No | The paper specifies no software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used in the experiments.
Experiment Setup | Yes | "Experiment Settings: In all of the above methods and in ours, β = 0.1 wherever β appears; the regularization parameter is searched over the set {0, 0.5, 1, ..., 4.5, 5} wherever it appears. We initialize Y = G + 0.3, where G ∈ {0,1}^(k×n) is computed from standard k-means clustering, and the dictionary A ∈ ℝ^(p×k) is then computed from the centroid of each category. The convergence tolerance is set to machine precision."
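The paper's updating rules (equations (6) and (7)) are not reproduced on this page. As an illustration only, robust non-negative dictionary learning of this family is commonly solved with multiplicative updates for an L2,1-norm objective, min ||X − AY||_{2,1} + β||Y||_1 with A, Y ≥ 0. The sketch below assumes that standard form; the function name, the objective, and the exact update rules are assumptions, not the paper's verified equations.

```python
import numpy as np

def robust_nndl_step(X, A, Y, beta=0.1, eps=np.finfo(float).eps):
    """One multiplicative update for an assumed L2,1-norm robust
    non-negative dictionary-learning objective:
        min ||X - A Y||_{2,1} + beta * ||Y||_1,  A >= 0, Y >= 0.
    X is p x n (non-negative data), A is p x k (dictionary),
    Y is k x n (codes).  Illustrative only."""
    # Per-sample residual norms give each column a weight; large-error
    # (outlier) samples are down-weighted, which is the source of robustness.
    R = X - A @ Y
    d = 1.0 / np.maximum(np.linalg.norm(R, axis=0), eps)  # length-n weights
    D = np.diag(d)
    # Multiplicative updates preserve non-negativity when X >= 0.
    Y *= (A.T @ X @ D) / np.maximum(A.T @ A @ Y @ D + beta, eps)
    A *= (X @ D @ Y.T) / np.maximum(A @ Y @ D @ Y.T, eps)
    return A, Y
```

Repeating this step until the change in the objective falls below machine precision matches the stopping criterion quoted in the experiment settings.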
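The initialization described in the experiment settings (Y = G + 0.3 with G the binary k-means indicator matrix, and A taken from the cluster centroids) can be sketched in NumPy as follows. The helper name and the plain Lloyd-style k-means loop are illustrative assumptions; the paper does not say which k-means implementation it uses.

```python
import numpy as np

def kmeans_init(X, k, n_iter=20, seed=0):
    """Initialization quoted in the experiment settings: run k-means on the
    columns of X (p x n), build the indicator G in {0,1}^(k x n), set
    Y = G + 0.3, and take the dictionary A (p x k) as the cluster centroids."""
    p, n = X.shape
    rng = np.random.default_rng(seed)
    C = X[:, rng.choice(n, size=k, replace=False)]  # initial centroids, p x k
    for _ in range(n_iter):
        # Assign each sample to its nearest centroid (squared distances, k x n).
        d2 = ((X[:, None, :] - C[:, :, None]) ** 2).sum(axis=0)
        labels = d2.argmin(axis=0)
        for j in range(k):  # recompute centroids of non-empty clusters
            if np.any(labels == j):
                C[:, j] = X[:, labels == j].mean(axis=1)
    G = np.zeros((k, n))
    G[labels, np.arange(n)] = 1.0  # binary cluster-indicator matrix
    Y = G + 0.3                    # soften the indicator, as in the paper
    A = C                          # dictionary atoms = cluster centroids
    return A, Y
```

Every entry of Y is then either 0.3 or 1.3, giving a strictly positive starting point for the multiplicative updates.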