Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Alternating minimization for dictionary learning with random initialization
Authors: Niladri Chatterji, Peter L. Bartlett
NeurIPS 2017 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information-theoretically optimal. Our main contribution is to present new conditions under which alternating minimization for dictionary learning converges at a linear rate to the optimum. |
| Researcher Affiliation | Academia | Niladri S. Chatterji University of California, Berkeley EMAIL; Peter L. Bartlett University of California, Berkeley EMAIL |
| Pseudocode | Yes | Algorithm 1: Alternating Minimization for Dictionary Learning |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the described methodology is available. |
| Open Datasets | No | The paper relies on a generative model for samples rather than any publicly available dataset; no open datasets are used or released. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments with datasets. Therefore, it does not provide information about dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not conduct empirical experiments. Therefore, it does not specify any hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not conduct empirical experiments. Therefore, it does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper specifies valid ranges for the algorithm's parameters (e.g., step size η, thresholds, tuning parameters λ, ν, γ) as conditions for its convergence guarantees. These are requirements of the theoretical analysis, not concrete hyperparameter values or training configurations for an empirical setup; the paper focuses on proofs rather than implementation details. |
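Since the paper provides pseudocode (Algorithm 1) but no open-source code, the structure of alternating minimization for dictionary learning can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact Algorithm 1: the hard-thresholding rule, step size, iteration count, and column renormalization here are generic assumptions, and the tuned parameter conditions (η, λ, ν, γ) from the paper's analysis are not reproduced.

```python
import numpy as np

def alternating_minimization(Y, A0, n_iters=50, eta=0.1, thresh=0.5):
    """Sketch of alternating minimization for dictionary learning.

    Y:  (d, n) samples, assumed generated as Y ≈ A* X* with sparse X*.
    A0: (d, K) initial dictionary estimate (e.g., close to A*).
    All parameter values are illustrative, not the paper's tuned settings.
    """
    A = A0.copy()
    for _ in range(n_iters):
        # Sparse coding step: estimate coefficients by hard-thresholding A^T Y.
        X = A.T @ Y
        X[np.abs(X) < thresh] = 0.0
        # Dictionary update step: one gradient step on ||Y - A X||_F^2 / n.
        A = A - eta * (A @ X - Y) @ X.T / Y.shape[1]
        # Renormalize columns of the dictionary to unit norm.
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A
```

On synthetic data drawn from such a generative model (a ground-truth dictionary with sparse coefficients), the routine alternates the two steps from a given initialization and returns a unit-column dictionary estimate.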