Robust Dictionary Learning with Capped ℓ1-Norm

Authors: Wenhao Jiang, Feiping Nie, Heng Huang

IJCAI 2015

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We provided theoretical analysis and carried out extensive experiments on real-world datasets and synthetic datasets to show the effectiveness of our method." |
| Researcher Affiliation | Academia | University of Texas at Arlington |
| Pseudocode | Yes | Algorithm 1: Robust dictionary learning with capped ℓ1-norm; Algorithm 2: Weighted dictionary learning; Algorithm 3: Dictionary update |
| Open Source Code | No | The paper does not provide any explicit statements or links for open-source code. |
| Open Datasets | Yes | Extended Yale B dataset [Georghiades et al., 2001] and AR face dataset [Martinez, 1998]. |
| Dataset Splits | Yes | "We split the database randomly into two halves. One half, which contains about 32 images for each person, was used for training the dictionary. The other half was used for testing." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software components with version numbers. |
| Experiment Setup | Yes | "In this experiment, we set the fraction of outliers as 0.05 and λ = 0.1 empirically. The dictionary size is 570 for all methods," i.e., 15 items per person on average. |
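The paper's Algorithms 1–3 are not reproduced here. As a rough illustration of the general idea behind capped-norm robustness (a sketch of the common reweighting interpretation, not the authors' exact formulation — the function name `capped_l1_weights` and the cap parameter `eps` are assumptions), a capped loss of the form min(‖x_i − Da_i‖₁, ε) stops growing once a sample's reconstruction residual reaches the cap ε, so gross outliers receive zero effective weight in the subsequent weighted dictionary update:

```python
import numpy as np

def capped_l1_weights(X, D, A, eps):
    """Per-sample weights induced by a capped l1 residual loss.

    Illustrative sketch (not the paper's exact update): samples whose
    l1 reconstruction residual stays below the cap keep weight 1;
    samples at or above the cap are treated as outliers (weight 0).
    """
    residuals = np.abs(X - D @ A).sum(axis=0)  # l1 residual per column/sample
    return (residuals < eps).astype(float)

# Toy demo: one perfectly reconstructed sample, one gross outlier.
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 3))   # dictionary (5-dim signals, 3 atoms)
A = rng.standard_normal((3, 2))   # codes for 2 samples
X = D @ A                         # exact reconstructions
X[:, 1] += 100.0                  # corrupt the second sample
w = capped_l1_weights(X, D, A, eps=1.0)
print(w)  # the corrupted sample gets weight 0
```

The clean sample has zero residual and keeps full weight, while the corrupted one exceeds the cap and is excluded, which is what makes the dictionary update robust to outliers.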