Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and Sparsity

Authors: Dan Garber

NeurIPS 2020

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "To further motivate the strict complementarity assumption, in Table 2 we bring empirical evidence for a standard setup of recovering a sparse vector from (random) linear measurements. In particular, it is observable that the strict complementarity parameter δ does not change substantially as the dimension d grows."
Researcher Affiliation: Academia. Dan Garber, Department of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa, Israel 3200003, dangar@technion.ac.il.
Pseudocode: Yes. Algorithm 1, "Frank-Wolfe Algorithm with line-search"; Algorithm 2, "Frank-Wolfe Algorithm with away-steps and line-search" (see also [17, 21]).
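For concreteness, the Frank-Wolfe-with-line-search template of Algorithm 1 can be sketched for the paper's least-squares-over-a-scaled-simplex setting. This is a minimal sketch, not the paper's code; the function name, starting point, and iteration budget are my choices:

```python
import numpy as np

def frank_wolfe_simplex(A, b, tau=0.7, iters=200):
    """Frank-Wolfe with exact line search for min_{x in tau*S_d} ||Ax - b||^2."""
    d = A.shape[1]
    x = np.full(d, tau / d)                # start at the barycenter of tau*S_d
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        s = np.zeros(d)
        s[np.argmin(g)] = tau              # LMO over tau*S_d: best vertex tau*e_i
        direction = s - x
        Ad = A @ direction
        denom = Ad @ Ad
        if denom <= 1e-18:                 # direction is in the null space; stop
            break
        # exact line search for the quadratic objective, clipped to [0, 1]
        gamma = np.clip(-(g @ direction) / denom, 0.0, 1.0)
        x = x + gamma * direction
    return x
```

Each iterate is a convex combination of points in tau*S_d, so feasibility (nonnegativity, coordinates summing to tau) is preserved by construction.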
Open Source Code: No. The paper does not contain any statements about releasing source code or links to code repositories.
Open Datasets: No. Table 2 describes a synthetic data generation process for an empirical illustration: "Recovering a random sparse vector in the unit simplex x0 ∈ S_d with nnz(x0) = 5 from noisy measurements b = Ax0 + c‖Ax0‖v, where A ∈ R^{m×d} has i.i.d. standard Gaussian entries, v is a random unit vector and c = 0.2." This is a description of how data was generated, not a reference to a publicly available dataset with concrete access information.
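Under this (reconstructed) description, the synthetic instances could be generated as follows. The norm scaling ‖Ax0‖ in the noise term is my reading of the garbled quote, and the way the nonzero weights of x0 are drawn (uniform, then normalized) is an assumption the paper does not specify:

```python
import numpy as np

def make_instance(m, d, nnz=5, c=0.2, seed=0):
    """Sparse vector in the unit simplex + noisy Gaussian measurements."""
    rng = np.random.default_rng(seed)
    x0 = np.zeros(d)
    support = rng.choice(d, size=nnz, replace=False)
    w = rng.random(nnz)
    x0[support] = w / w.sum()              # nnz(x0) = nnz, x0 in the unit simplex
    A = rng.standard_normal((m, d))        # i.i.d. standard Gaussian entries
    v = rng.standard_normal(m)
    v /= np.linalg.norm(v)                 # random unit vector
    b = A @ x0 + c * np.linalg.norm(A @ x0) * v   # 20% relative noise for c = 0.2
    return A, b, x0
```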
Dataset Splits: No. The paper describes a synthetic data generation process and an experiment setup in Table 2, but does not provide explicit training, validation, or test dataset splits (e.g., percentages or counts) or refer to standard predefined splits.
Hardware Specification: No. The paper does not provide any specific details about the hardware used to run the empirical evaluations (e.g., CPU, GPU, or cloud specifications).
Software Dependencies: No. The paper does not list any software dependencies with specific version numbers for its experiments.
Experiment Setup: Yes. "For recovery we solve x* ∈ argmin_{x ∈ τS_d} ‖Ax − b‖², where τ = 0.7 (we need to scale down the unit simplex to avoid fitting the noise). The recovery error is ‖τ⁻¹x* − x0‖₂. The results are averaged over 50 i.i.d. runs."
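Putting the (reconstructed) setup together, a self-contained sketch of the full pipeline might look as follows. The plain Frank-Wolfe solver, the instance sizes, and the number of runs (5 instead of the paper's 50, for speed) are my assumptions, not the paper's exact configuration:

```python
import numpy as np

def recovery_error(A, b, x0, tau=0.7, iters=300):
    """Solve min_{x in tau*S_d} ||Ax - b||^2 by Frank-Wolfe; return ||x/tau - x0||_2."""
    d = A.shape[1]
    x = np.full(d, tau / d)
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        s = np.zeros(d)
        s[np.argmin(g)] = tau              # LMO: vertex of the scaled simplex
        diff = s - x
        Ad = A @ diff
        if Ad @ Ad <= 1e-18:
            break
        gamma = np.clip(-(g @ diff) / (Ad @ Ad), 0.0, 1.0)  # exact line search
        x = x + gamma * diff
    return np.linalg.norm(x / tau - x0)    # recovery error ||tau^{-1} x* - x0||_2

# Average over a few i.i.d. runs (the paper averages over 50).
errs = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    d, m = 100, 60
    x0 = np.zeros(d)
    x0[rng.choice(d, 5, replace=False)] = 0.2   # nnz(x0) = 5, sums to 1
    A = rng.standard_normal((m, d))
    v = rng.standard_normal(m)
    v /= np.linalg.norm(v)
    b = A @ x0 + 0.2 * np.linalg.norm(A @ x0) * v
    errs.append(recovery_error(A, b, x0))
mean_err = float(np.mean(errs))
```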