NOODL: Provable Online Dictionary Learning and Sparse Coding

Authors: Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

ICLR 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques." "We now analyze the convergence properties and sample complexity of NOODL via experimental evaluations." |
| Researcher Affiliation | Academia | Dept. of Electrical and Computer Engineering, University of Minnesota Twin Cities, USA; Dept. of Computer Science, Princeton University, Princeton, NJ, USA |
| Pseudocode | Yes | Algorithm 1: "NOODL: Neurally plausible alternating Optimization-based Online Dictionary Learning." (An illustrative sketch of the alternating update appears below this table.) |
| Open Source Code | Yes | "The associated code is made available at https://github.com/srambhatla/NOODL." |
| Open Datasets | No | The paper uses synthetically generated data: "We generate a (n = 1000, m = 1500) matrix, with entries drawn from N(0, 1), and normalize its columns to form the ground-truth dictionary A*." It does not provide concrete access information for a publicly available or open dataset. (A data-generation sketch appears below this table.) |
| Dataset Splits | No | The paper describes a data-generation setup for synthetic data but does not specify explicit train/validation/test splits, which is common for online learning algorithms where data is streamed. |
| Hardware Specification | Yes | "The results shown here were compiled using 5 cores and 200GB RAM of Intel Xeon E5-2670 Sandy Bridge and Haswell E5-2680v3 processors." |
| Software Dependencies | No | The paper mentions using FISTA (Beck and Teboulle, 2009) and Lasso (Tibshirani, 1996) as third-party methods. However, it does not provide specific version numbers for these or any other software dependencies crucial for reproduction. |
| Experiment Setup | Yes | "Here, for all experiments we set η_x = 0.2 and τ = 0.1. We terminate NOODL when the error in dictionary is less than 10^-10. Also, for coefficient update, we terminate when change in the iterates is below 10^-12." (These values are reflected in the sketches below.) |
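
To make the synthetic setup quoted in the Open Datasets row concrete, the following is a minimal sketch of the data generation, assuming NumPy. The dimensions n = 1000, m = 1500, the N(0, 1) entries, and the column normalization come from the paper's quoted description; the sparsity level k, mini-batch size p, and the coefficient distribution (random supports with ±1 values) are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch of the synthetic data setup (illustrative; k, p, and the coefficient
# distribution are assumptions, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 1500          # signal dimension, number of dictionary atoms (from the paper)
k, p = 10, 5000            # sparsity per sample, samples per mini-batch (assumed)

# Ground-truth dictionary A*: i.i.d. N(0, 1) entries, columns normalized to unit norm.
A_star = rng.standard_normal((n, m))
A_star /= np.linalg.norm(A_star, axis=0, keepdims=True)

# Sparse coefficients: k-sparse columns with random supports and +/-1 values (assumed).
X_star = np.zeros((m, p))
for j in range(p):
    support = rng.choice(m, size=k, replace=False)
    X_star[support, j] = rng.choice([-1.0, 1.0], size=k)

# Observed mini-batch of samples.
Y = A_star @ X_star
```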
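The Pseudocode row refers to Algorithm 1 of the paper. The sketch below is a loose NumPy rendering of a NOODL-style alternating update, not the paper's exact algorithm: it assumes an IHT-style coefficient step with hard threshold τ = 0.1 and step size η_x = 0.2 (the values quoted in the Experiment Setup row), a simple gradient-style dictionary step with column re-normalization, and a coefficient stopping tolerance of 10^-12. The dictionary step size eta_A, the number of inner iterations, and the initialization of the dictionary estimate are placeholders; Algorithm 1 additionally prescribes a specific thresholding schedule, gradient estimator, and a warm-start dictionary close to A*.

```python
# Minimal sketch of a NOODL-style alternating update (illustrative only).
import numpy as np

def hard_threshold(x, tau):
    """Zero out entries whose magnitude is below tau."""
    return np.where(np.abs(x) >= tau, x, 0.0)

def coefficient_step(A, Y, eta_x=0.2, tau=0.1, n_iht=50, tol=1e-12):
    """IHT-style sparse coding of the batch Y against the current dictionary A."""
    X = hard_threshold(A.T @ Y, tau)              # coarse initialization
    for _ in range(n_iht):
        X_new = hard_threshold(X - eta_x * (A.T @ (A @ X - Y)), tau)
        if np.max(np.abs(X_new - X)) < tol:       # stop when iterates barely change
            return X_new
        X = X_new
    return X

def dictionary_step(A, Y, X, eta_A=0.5):
    """Gradient-style dictionary update followed by column normalization (eta_A assumed)."""
    p = Y.shape[1]
    G = (A @ X - Y) @ np.sign(X).T / p            # approximate gradient of the fit
    A = A - eta_A * G
    return A / np.linalg.norm(A, axis=0, keepdims=True)

# One online round on a fresh mini-batch Y, starting from a coarse estimate A:
#   X = coefficient_step(A, Y)
#   A = dictionary_step(A, Y, X)
# The paper terminates the outer loop when the dictionary error falls below 10^-10,
# which requires access to the ground-truth A* and is omitted from this sketch.
```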