Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Complete Dictionary Recovery Using Nonconvex Optimization
Authors: Ju Sun, Qing Qu, John Wright
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'Experiments with synthetic data corroborate our theory.' and, from Section 6 (Numerical Results): 'To corroborate our theory, we experiment with dictionary recovery on simulated data.' |
| Researcher Affiliation | Academia | Ju Sun, Qing Qu, John Wright; Department of Electrical Engineering, Columbia University, New York, NY, USA |
| Pseudocode | Yes | Algorithm 1 Trust Region Method for Finding a Single Sparse Vector |
| Open Source Code | Yes | The code is available online: http://github.com/sunju/dl_focm |
| Open Datasets | No | The paper uses simulated data: 'We fix p = 5n² log(n), and each column of the coefficient matrix X0 ∈ R^(n×p) has exactly k nonzero entries, with support chosen uniformly at random from the k-subsets of [n]. These nonzero entries are i.i.d. standard normals.' No public dataset is used or linked. |
| Dataset Splits | No | The paper describes generation of synthetic data but does not mention explicit training, validation, or test dataset splits for model reproduction. The experiments involve repeating simulations for different parameters (k, n). |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions implementing Algorithm 1 but does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | For the sparsity surrogate defined in (2.3), we fix the parameter µ = 10⁻². |
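For reference, the synthetic data generation quoted in the Open Datasets row can be sketched as below. This is a minimal sketch under stated assumptions: the function name is ours, and the use of a random orthogonal matrix as the complete dictionary A0 is one common choice; the paper's exact construction of A0 is not specified in the snippets above.

```python
import numpy as np

def generate_synthetic_data(n, k, seed=0):
    """Sketch of the synthetic setup: observe Y = A0 @ X0, where A0 is a
    square (complete) dictionary and each column of X0 has exactly k
    nonzero entries drawn i.i.d. from a standard normal distribution."""
    rng = np.random.default_rng(seed)
    p = int(5 * n**2 * np.log(n))  # number of samples, p = 5n^2 log(n)
    # Random orthogonal dictionary via QR of a Gaussian matrix (assumption).
    A0, _ = np.linalg.qr(rng.standard_normal((n, n)))
    X0 = np.zeros((n, p))
    for j in range(p):
        # Support is a uniformly random k-subset of {0, ..., n-1}.
        support = rng.choice(n, size=k, replace=False)
        X0[support, j] = rng.standard_normal(k)
    return A0, X0, A0 @ X0
```

A driver would then sweep (k, n) and repeat the simulation, as the Dataset Splits row notes, since the experiments use repeated random trials rather than fixed train/validation/test splits.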