GAP Safe screening rules for sparse multi-task and multi-class models

Authors: Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

NeurIPS 2015

Reproducibility assessment (variable: result, followed by the LLM response):

- Research Type: Experimental. "In this paper we derive new safe rules for generalized linear models regularized with ℓ1 and ℓ1/ℓ2 norms. ... The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules."
- Researcher Affiliation: Academia. "LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, Paris, 75013, France"
- Pseudocode: No. "In all experiments, the coordinate descent algorithm used follows the pseudo code from [11] with a screening step every 10 iterations."
- Open Source Code: No. The paper states that its implementation builds on the Scikit-Learn and Lightning software, but does not state that the specific GAP Safe rule implementation is open source, nor does it provide a link to it.
- Open Datasets: No. The paper mentions the MEG/EEG brain imaging, Leukemia, and News20 datasets, but provides no concrete access information (link, DOI, specific repository, or formal citation with author/year for public access) for the exact datasets used.
- Dataset Splits: No. The paper does not state training/validation/test splits, sample counts, or a cross-validation methodology for partitioning the data.
- Hardware Specification: No. The paper does not specify the hardware used for the experiments (e.g., CPU/GPU models, memory, or computing environment).
- Software Dependencies: No. The paper mentions Python and Cython, Scikit-Learn [17], and the Lightning software [4], but gives no version numbers for these dependencies.
- Experiment Setup: Yes. "In all experiments, the coordinate descent algorithm used follows the pseudo code from [11] with a screening step every 10 iterations." The setup consists in estimating the solutions of the multi-task Lasso problem for 100 values of λ on a logarithmic grid from λmax down to λmax/10³.
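The regularization path described in the experiment-setup row is easy to reconstruct. Below is a minimal sketch (not the authors' code): it assumes NumPy, uses random data in place of the paper's datasets, and adopts the common 1/(2n) least-squares scaling under which λmax = ‖Xᵀy‖∞ / n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 100
X = rng.standard_normal((n, p))  # illustrative design matrix
y = rng.standard_normal(n)       # illustrative response

# lambda_max is the smallest lambda for which the zero vector solves the
# Lasso; with the 1/(2n) least-squares scaling it equals ||X^T y||_inf / n.
lam_max = np.max(np.abs(X.T @ y)) / n

# 100 values on a logarithmic grid from lam_max down to lam_max / 10**3,
# matching the setup quoted above.
lambdas = lam_max * np.logspace(0.0, -3.0, num=100)
```

A solver would then be run on each λ in this grid (warm-started from the previous solution), with a screening step every 10 iterations as the paper describes.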