Projection onto Minkowski Sums with Application to Constrained Learning

Authors: Joong-Ho Won, Jason Xu, Kenneth Lange

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate empirical advantages in runtime and accuracy over competitors in applications to ℓ1,p-regularized learning, constrained lasso, and overlapping group lasso. Results of the simulation study are summarized in Figure 1.
Researcher Affiliation | Academia | (1) Department of Statistics, Seoul National University; (2) Department of Statistical Science, Duke University; (3) University of California, Los Angeles.
Pseudocode | Yes | Algorithm 1: Projection onto a Minkowski sum of sets (a hedged sketch of such a projection routine appears after this table).
Open Source Code | No | The paper states 'We compare a MATLAB implementation of our algorithm' but provides no link or other access information for the source code of its own method.
Open Datasets | No | The experiments use 'randomly generated inputs x' for the ℓ1,p-overlapping group lasso and 'randomly sampled A and noisy response b' for the constrained lasso; no publicly available datasets, links, or formal citations are referenced.
Dataset Splits | No | Because the experiments run on randomly generated inputs and sampled matrices/responses, there are no fixed training, validation, or test splits.
Hardware Specification | Yes | Two setups are quoted: 'The simulation was run on a Linux machine with two Intel Xeon E5-2650v4 (2.20GHz) CPUs.' and 'The simulation was run on a Linux machine with two Intel Xeon E5-2680v2 (2.80GHz) CPUs with 256GB memory.'
Software Dependencies | No | The paper mentions MATLAB, SLEP (Liu et al., 2011), and Gurobi (Gurobi Optimization, LLC, 2018) but gives no version numbers for any of these components.
Experiment Setup | Yes | For each combination of d = 10³, 10⁴, 10⁵, 10⁶ and g = 10, 20, 50, 100, proximal maps were computed with both methods for 50 randomly generated inputs x, using λ = 2.1. Four sparsity levels were tried: λ/λ_max = 0.2, 0.4, 0.6, 0.8, where λ_max is the maximal sparsity level found by solving a linear program via Gurobi (Gaines et al., 2018, Sect. 3). (A sketch of this simulation grid also appears after the table.)
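
The pseudocode row refers to Algorithm 1, projection onto a Minkowski sum of sets. As a rough illustration only, the Python sketch below shows the standard alternating-projection (block coordinate descent) scheme for projecting a point x onto A + B; the names project_minkowski_sum, proj_A, and proj_B, the zero initialization, and the stopping rule are assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_minkowski_sum(x, proj_A, proj_B, max_iter=1000, tol=1e-10):
    """Project x onto the Minkowski sum A + B by block coordinate descent
    on min ||x - u - v||^2 over u in A, v in B (alternating projections)."""
    v = np.zeros_like(x)
    for _ in range(max_iter):
        u = proj_A(x - v)          # optimal u in A for the current v
        v_new = proj_B(x - u)      # optimal v in B for the new u
        if np.linalg.norm(v_new - v) <= tol * (1.0 + np.linalg.norm(v)):
            v = v_new
            break
        v = v_new
    return u + v                   # approximate nearest point of A + B to x

# Toy usage: A = unit Euclidean ball, B = box [-1, 1]^d.
proj_ball = lambda z: z / max(1.0, float(np.linalg.norm(z)))
proj_box = lambda z: np.clip(z, -1.0, 1.0)
print(project_minkowski_sum(np.array([3.0, -4.0, 0.5]), proj_ball, proj_box))
```

Each inner step is an exact projection onto one set with the other block held fixed, so the objective is monotonically non-increasing; this is why a simple fixed-point stopping rule on v suffices in the sketch.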
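To make the experiment-setup row concrete, here is a hedged sketch of the simulation grid it describes (the d and g combinations, 50 random inputs per cell, λ = 2.1). The timed prox_stub is a placeholder (plain elementwise soft-thresholding that ignores group structure); the paper's ℓ1,p-overlapping-group proximal routine is not publicly available, so every name here is illustrative.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
lam = 2.1  # regularization level quoted in the setup

def prox_stub(x, lam):
    # Placeholder prox: elementwise soft-thresholding. NOT the paper's
    # l1,p-overlapping-group prox; it ignores group structure entirely.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

for d in (10**3, 10**4):          # the paper's grid also includes 10**5, 10**6
    for g in (10, 20, 50, 100):   # numbers of groups in the paper's grid
        times = []
        for _ in range(50):       # 50 randomly generated inputs x per cell
            x = rng.standard_normal(d)
            t0 = time.perf_counter()
            prox_stub(x, lam)     # timing target; swap in the real prox here
            times.append(time.perf_counter() - t0)
        print(f"d={d:>7}  g={g:>3}  median time {np.median(times):.2e}s")
```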