GAP Safe Screening Rules for Sparse-Group Lasso
Authors: Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we present our experiments and illustrate the numerical benefit of screening rules for the Sparse-Group Lasso. |
| Researcher Affiliation | Academia | Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon; LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France; first.last@telecom-paristech.fr |
| Pseudocode | Yes | Algorithm 1: Computation of Λ(x, α, R). |
| Open Source Code | Yes | The source code can be found in https://github.com/EugeneNdiaye/GAPSAFE_SGL. |
| Open Datasets | Yes | Real dataset: NCEP/NCAR Reanalysis 1 [14] |
| Dataset Splits | Yes | We choose τ in the set {0, 0.1, ..., 0.9, 1} by splitting the observations 50/50 and running a training-test validation procedure. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) were provided for the experimental setup. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) were provided. The paper mentions the 'R glmnet package' and the 'ISTA-BC algorithm', but without versions. |
| Experiment Setup | Yes | By default, we choose δ = 3 and T = 100, following the standard practice when running cross-validation using sparse models (see the R glmnet package [11]). The weights are always chosen as w_g = √n_g (as in [17]). The expensive computation of the dual gap is not performed at each pass over the data, but only every f_ce passes (in practice f_ce = 10 in all our experiments). |
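
The Experiment Setup row quotes the paper's default path settings: T = 100 regularization levels spanning three orders of magnitude (δ = 3), group weights w_g = √n_g, and a duality-gap evaluation only every f_ce = 10 passes over the data. Below is a minimal sketch of how such a grid and the weights could be built, assuming the geometric convention λ_t = λ_max · 10^(−δ·t/(T−1)) used in the GAP Safe papers; the function names and the λ_max argument are illustrative and not taken from the GAPSAFE_SGL code.

```python
import numpy as np

def lambda_grid(lambda_max, T=100, delta=3):
    """Geometric grid lambda_t = lambda_max * 10**(-delta * t / (T - 1)), t = 0, ..., T-1."""
    t = np.arange(T)
    return lambda_max * 10.0 ** (-delta * t / (T - 1))

def group_weights(group_sizes):
    """Group weights w_g = sqrt(n_g), matching the quoted setup."""
    return np.sqrt(np.asarray(group_sizes, dtype=float))

# Example: 100 values from lambda_max down to lambda_max * 1e-3, and four groups
# of sizes 5, 5, 10, 20; the dual gap (and hence the screening test) would only
# be evaluated every f_ce = 10 passes over the data.
lambdas = lambda_grid(lambda_max=1.0)
weights = group_weights([5, 5, 10, 20])
f_ce = 10
```

Evaluating the gap only every f_ce passes is a cost trade-off: the screening certificate is slightly stale, but the full pass over X needed to compute the gap is amortized over several epochs, consistent with the quoted setup.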
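
The Pseudocode row refers to Algorithm 1, which computes Λ(x, α, R), the quantity the paper uses to evaluate the dual norm of the sparse-group penalty; that algorithm is not reproduced here. Purely as a structural illustration, the sketch below shows a GAP Safe screening test for the plain Lasso penalty (λ times the ℓ1 norm of β): a dual-feasible point is obtained by rescaling the residual, the safe radius is √(2·gap)/λ, and feature j is discarded when |x_jᵀθ| + r·||x_j||_2 < 1. All names are hypothetical, and the Sparse-Group-Lasso-specific feature and group tests from the paper are omitted.

```python
import numpy as np

def gap_safe_lasso_screen(X, y, beta, lam):
    """One GAP Safe screening pass for the plain Lasso (illustration only,
    not the paper's Sparse-Group Lasso test).

    Returns a boolean mask of features certified to be zero at the optimum.
    """
    residual = y - X @ beta
    rho = residual / lam
    # Dual-feasible point: rescale the residual into {theta : ||X^T theta||_inf <= 1}.
    theta = rho / max(1.0, np.max(np.abs(X.T @ rho)))
    # Duality gap between the Lasso primal and dual objectives.
    primal = 0.5 * residual @ residual + lam * np.sum(np.abs(beta))
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # GAP Safe sphere radius around theta.
    r = np.sqrt(2.0 * gap) / lam
    # Sphere test: feature j is inactive if |x_j^T theta'| < 1 for every theta'
    # in the ball of radius r around theta, i.e. |x_j^T theta| + r * ||x_j|| < 1.
    return np.abs(X.T @ theta) + r * np.linalg.norm(X, axis=0) < 1.0
```

In the paper's solver this kind of test is interleaved with block coordinate descent and, as the quoted setup notes, the gap (and therefore the screening) is evaluated only every f_ce passes to amortize its cost.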