Implicit Regularization for Optimal Sparse Recovery

Authors: Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We validate our findings with numerical experiments and compare our algorithm against explicit ℓ1 penalization." |
| Researcher Affiliation | Academia | ¹Department of Statistics, ²Department of Computer Science, University of Oxford |
| Pseudocode | Yes | Algorithm 1: Let α, η > 0 be two given parameters. Let u_0 = v_0 = α and for all t ≥ 0 we let m_t = 1. Perform the updates given in (1). Algorithm 2: Let α > 0, τ ∈ ℕ, and ẑ with w_max ≤ ẑ ≤ 2w_max be three given parameters. Set η = 1/(20ẑ) and u_0 = v_0 = α. Perform the updates in (1) with m_0 = 1 and m_t adaptively defined as follows. Step 1: set m_t = m_{t-1}. Step 2: if t = mτ·log(α^{-1}) for some natural number m ≥ 2, then let m_{t,j} = 2m_{t-1,j} for all j such that u_{t,j}^2 ∨ v_{t,j}^2 ≤ 2^{-m-1}·ẑ. (See the code sketches after the table.) |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a link or an explicit statement) for its own source code. |
| Open Datasets | No | The paper generates synthetic data for its simulations: "For each run the entries of X are sampled as i.i.d. Rademacher random variables and the noise vector ξ follows i.i.d. N(0, σ^2) distribution." It does not use or provide access to any publicly available dataset. |
| Dataset Splits | Yes | "Among the 200 obtained models we choose the one with the smallest error on a validation dataset of size n/4." (See the simulation sketch after the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, or programming-language versions). |
| Experiment Setup | Yes | "Unless otherwise specified, the default values for simulation parameters are n = 500, d = 10^4, k = 25, α = 10^{-12}, γ = 1, σ = 1 and for Algorithm 2 we set τ = 10." (See the simulation sketch after the table.) |
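
The Pseudocode row quotes the algorithms but not the updates "(1)" they refer to. As a rough illustration, here is a minimal Python sketch of Algorithm 1, assuming the Hadamard-product parametrization w_t = u_t ⊙ u_t − v_t ⊙ v_t and plain gradient descent on the least-squares loss; the exact form of update (1) and its constants should be checked against the paper.

```python
import numpy as np

def algorithm_1(X, y, alpha=1e-12, eta=0.1, n_steps=10_000):
    """Sketch of Algorithm 1: gradient descent on the parametrization
    w = u*u - v*v, started at u_0 = v_0 = alpha (per coordinate).

    The multiplicative form below is an assumed reading of update (1):
    it is what differentiating ||X w - y||^2 / n in (u, v) yields.
    """
    n, d = X.shape
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(n_steps):
        w = u * u - v * v
        r = X.T @ (X @ w - y) / n   # (half the) gradient of the loss in w
        u *= 1.0 - 4.0 * eta * r    # chain rule through u*u gives the factor 4
        v *= 1.0 + 4.0 * eta * r
    return u * u - v * v
```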
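
Algorithm 2 uses the same updates but doubles a per-coordinate multiplier m_t on a geometric schedule, as quoted in the table. In the sketch below, how m_t enters the update and the rounding of τ·log(1/α) to an integer block length are assumptions, not the paper's stated choices.

```python
import numpy as np

def algorithm_2(X, y, alpha, tau, z_hat, n_steps):
    """Sketch of Algorithm 2 with eta = 1/(20*z_hat) and the adaptive
    per-coordinate multiplier m_t quoted in the table above."""
    n, d = X.shape
    eta = 1.0 / (20.0 * z_hat)
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    m = np.ones(d)                                         # m_0 = 1
    block = max(1, int(round(tau * np.log(1.0 / alpha))))  # assumed rounding
    for t in range(1, n_steps + 1):
        # At t = m_idx * tau * log(1/alpha) with m_idx >= 2, double m_{t,j} for
        # coordinates still small: max(u_j^2, v_j^2) <= 2^(-m_idx-1) * z_hat.
        if t % block == 0 and t // block >= 2:
            m_idx = t // block
            small = np.maximum(u * u, v * v) <= 2.0 ** (-m_idx - 1) * z_hat
            m[small] *= 2.0
        w = u * u - v * v
        r = X.T @ (X @ w - y) / n
        u *= 1.0 - 4.0 * eta * m * r   # assumed placement of m_t in update (1)
        v *= 1.0 + 4.0 * eta * m * r
    return u * u - v * v
```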
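
Finally, a sketch of the simulation setup assembled from the Open Datasets, Dataset Splits, and Experiment Setup rows. The nonzero values of the k-sparse signal, and whether the validation set is held out from the n samples or drawn fresh, are not specified in this summary, so those choices below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Default simulation parameters quoted in the table above.
n, d, k, sigma = 500, 10_000, 25, 1.0

# i.i.d. Rademacher design and i.i.d. N(0, sigma^2) noise, as quoted.
X = rng.choice([-1.0, 1.0], size=(n, d))
w_star = np.zeros(d)
w_star[:k] = 1.0   # placeholder: the paper's nonzero values are not given here
y = X @ w_star + sigma * rng.standard_normal(n)

# "A validation dataset of size n/4": one plausible reading is a held-out
# quarter of the samples (a freshly drawn set would work the same way).
n_val = n // 4
X_tr, y_tr = X[:n - n_val], y[:n - n_val]
X_val, y_val = X[n - n_val:], y[n - n_val:]

def validation_error(w):
    """Mean squared error on the validation set."""
    return np.mean((X_val @ w - y_val) ** 2)

# Model selection over checkpoints, e.g. 200 snapshots of w_t taken while
# running algorithm_1(X_tr, y_tr, ...) from the first sketch:
#   best = min(snapshots, key=validation_error)
```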