Time–Data Tradeoffs by Aggressive Smoothing

Authors: John J. Bruer, Joel A. Tropp, Volkan Cevher, Stephen R. Becker

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 3 shows the results of a numerical experiment that compares the performance difference between current numerical practice and our aggressive smoothing approach.
Researcher Affiliation | Academia | John J. Bruer (1,*), Joel A. Tropp (1), Volkan Cevher (2), Stephen R. Becker (3). (1) Dept. of Computing + Mathematical Sciences, California Institute of Technology; (2) Laboratory for Information and Inference Systems, EPFL; (3) Dept. of Applied Mathematics, University of Colorado at Boulder. (*) jbruer@cms.caltech.edu
Pseudocode | Yes | Algorithm 3.1: Auslender–Teboulle applied to the dual-smoothed RLIP.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes.
Open Datasets | No | In the experiment, we fix both the ambient dimension d = 40 000 and the normalized sparsity ρ = 5%. To test each smoothing approach, we generate and solve 10 random sparse vector recovery models for each value of the sample size m = 12 000, 14 000, 16 000, ..., 38 000. Each random model comprises a Gaussian measurement matrix A and a random sparse vector x whose nonzero entries are ±1 with equal probability.
Dataset Splits | No | The paper describes generating random models for various sample sizes, but does not provide specific training/test/validation splits from a pre-existing dataset.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | In the experiment, we fix both the ambient dimension d = 40 000 and the normalized sparsity ρ = 5%. We stop Algorithm 3.1 when the relative error ||x_k − x|| / ||x|| is less than 10^-3. For the constant smoothing case, we choose µ = 0.1 based on the recommendation in [15]. We set the smoothing parameter µ = µ(m)/4.
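The experiment setup quoted above pins down the random model and the stopping rule precisely enough to sketch in code. The following is a minimal, hypothetical reconstruction of the data-generation step and the relative-error stopping criterion; the function names and the use of NumPy are assumptions, not the authors' implementation, and the solver (Algorithm 3.1) itself is not reproduced here.

```python
import numpy as np

def make_model(m, d=40_000, rho=0.05, rng=None):
    """Draw one random sparse-recovery instance as described in the paper:
    a Gaussian measurement matrix A (m x d) and a rho-sparse signal x
    whose nonzero entries are +/-1 with equal probability."""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((m, d))
    x = np.zeros(d)
    # Place k = rho * d nonzeros on a uniformly random support.
    support = rng.choice(d, size=int(rho * d), replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=support.size)
    return A, x

def converged(x_k, x_true, tol=1e-3):
    """Stopping rule from the experiment: ||x_k - x|| / ||x|| < 10^-3."""
    return np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true) < tol
```

At the paper's full scale (d = 40 000, m up to 38 000) the matrix A alone holds over a billion entries, so a reproduction would likely generate models at each of the listed sample sizes m = 12 000, 14 000, ..., 38 000 one at a time, solving and discarding each before the next.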