Complete Dictionary Recovery Using Nonconvex Optimization

Authors: Ju Sun, Qing Qu, John Wright

ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments with synthetic data corroborate our theory." And, from Section 6 (Numerical Results): "To corroborate our theory, we experiment with dictionary recovery on simulated data." |
| Researcher Affiliation | Academia | Ju Sun (JS4038@COLUMBIA.EDU), Qing Qu (QQ2105@COLUMBIA.EDU), John Wright (JW2966@COLUMBIA.EDU), Department of Electrical Engineering, Columbia University, New York, NY, USA |
| Pseudocode | Yes | "Algorithm 1: Trust Region Method for Finding a Single Sparse Vector" (a deliberately simplified first-order sketch appears after the table) |
| Open Source Code | Yes | "The code is available online: http://github.com/sunju/dl_focm" |
| Open Datasets | No | The paper uses simulated data: "We fix p = 5 n^2 log(n), and each column of the coefficient matrix X0 ∈ R^(n×p) has exactly k nonzero entries, whose support is chosen uniformly at random from ([n] choose k). These nonzero entries are i.i.d. standard normals." No public dataset is used or linked. (See the data-generation sketch after the table.) |
| Dataset Splits | No | The paper describes how the synthetic data are generated but gives no training, validation, or test splits; the experiments instead repeat simulations across parameter settings (k, n). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models) used to run the experiments. |
| Software Dependencies | No | The paper mentions implementing Algorithm 1 but does not list any software dependencies or version numbers. |
| Experiment Setup | Yes | "For the sparsity surrogate defined in (2.3), we fix the parameter µ = 10^-2." (See the surrogate sketch after the table.) |
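
The quoted protocol in the Open Datasets row is concrete enough to sketch in code. Below is a minimal NumPy sketch of that data-generation step, not the authors' released implementation (the linked dl_focm repository is the authoritative version); the function name `generate_data` and the choice of a uniformly random orthogonal matrix as the ground-truth complete dictionary A0 are our assumptions.

```python
import numpy as np

def generate_data(n, k, seed=None):
    """Sketch of the paper's synthetic model: Y = A0 @ X0 with a square
    ground-truth dictionary A0 and k-sparse coefficient columns in X0."""
    rng = np.random.default_rng(seed)
    p = int(5 * n**2 * np.log(n))  # p = 5 n^2 log(n), as quoted

    # Assumption (ours): A0 is a random orthogonal matrix, consistent
    # with the paper's complete (square, invertible) dictionary model.
    A0, _ = np.linalg.qr(rng.standard_normal((n, n)))

    # Each column of X0 has exactly k nonzeros on a uniformly random
    # support, with i.i.d. standard normal values, as quoted.
    X0 = np.zeros((n, p))
    for j in range(p):
        support = rng.choice(n, size=k, replace=False)
        X0[support, j] = rng.standard_normal(k)

    return A0 @ X0, A0, X0
```

For example, `generate_data(30, 3, seed=0)` yields roughly 5 · 30^2 · log(30) ≈ 15,300 columns.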
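
The parameter µ in the Experiment Setup row belongs to the paper's log-cosh sparsity surrogate, which in equation (2.3) smooths the absolute value as h_µ(z) = µ log cosh(z/µ). Assuming that form of (2.3), here is a minimal sketch of the surrogate and the resulting objective over the sphere:

```python
import numpy as np

def h_mu(z, mu=1e-2):
    """Smooth sparsity surrogate h_mu(z) = mu * log(cosh(z / mu)).
    Computed as mu * (logaddexp(z/mu, -z/mu) - log 2) so that cosh
    never overflows for large |z| / mu."""
    return mu * (np.logaddexp(z / mu, -z / mu) - np.log(2.0))

def objective(q, Y, mu=1e-2):
    """f(q) = (1/p) * sum_i h_mu(q^T y_i): the average surrogate
    sparsity of the correlations between q and the data columns,
    which the paper minimizes over the unit sphere."""
    return float(np.mean(h_mu(Y.T @ q, mu)))
```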
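
Algorithm 1 itself is a second-order Riemannian trust-region method, which we do not attempt to reproduce here. As a deliberately simplified stand-in that only illustrates the sphere constraint, the sketch below takes a first-order Riemannian gradient step on the objective above; this substitution is ours, not the paper's algorithm.

```python
import numpy as np

def riemannian_gradient_step(q, Y, mu=1e-2, step=0.05):
    """One Riemannian gradient step on the sphere S^{n-1} for
    f(q) = mean_i h_mu(q^T y_i), using h_mu'(z) = tanh(z / mu).

    Simplification (ours): the paper's Algorithm 1 solves a
    trust-region subproblem with second-order information; this
    first-order step is illustration only."""
    g = Y @ np.tanh(Y.T @ q / mu) / Y.shape[1]  # Euclidean gradient of f
    g_tan = g - (q @ g) * q                     # project onto tangent space at q
    q_new = q - step * g_tan                    # descend along the tangent direction
    return q_new / np.linalg.norm(q_new)        # retract back to the sphere
```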