Enhancing Parameter-Free Frank Wolfe with an Extra Subproblem

Authors: Bingcong Li, Lingda Wang, Georgios B. Giannakis, Zhizhen Zhao

AAAI 2021, pp. 8324-8331 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical tests on binary classification with different sparsity-promoting constraints demonstrate that the empirical performance of Extra FW is significantly better than FW, and even faster than Nesterov's accelerated gradient on certain datasets. For matrix completion, Extra FW enjoys a smaller optimality gap and a lower rank than FW. Section 4 (Numerical Tests) deals with numerical tests of Extra FW to showcase its effectiveness on different machine learning problems.
Researcher Affiliation | Academia | Bingcong Li (1), Lingda Wang (2), Georgios B. Giannakis (1), Zhizhen Zhao (2). Affiliations: (1) University of Minnesota Twin Cities; (2) University of Illinois at Urbana-Champaign. Emails: {lixx5599, georgios}@umn.edu, {lingdaw2, zhizhenz}@illinois.edu
Pseudocode | Yes | Algorithm 1 FW (Frank and Wolfe 1956), Algorithm 2 AFW (Li et al. 2020), Algorithm 3 Extra FW; a minimal sketch of the classic FW template appears after this table.
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code release statement, or code in supplementary materials) for the methodology described. The provided arXiv link is for the paper itself, not source code.
Open Datasets | Yes | Datasets mnist [2] and those from LIBSVM [3] are used in the numerical tests; Extra FW is also tested on the widely used MovieLens 100K dataset [4]. Footnote URLs: [2] http://yann.lecun.com/exdb/mnist/; [3] https://www.csie.ntu.edu.tw/~cjlin/libsvm/; [4] https://grouplens.org/datasets/movielens/100k/
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the training/validation/test partitioning. While it names the datasets used, it gives no explicit split details.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | In the simulation, R is tuned to obtain a solution that is almost as sparse as the dataset itself (see the ℓ1-ball oracle sketch after this table).
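
As a point of reference for the Pseudocode row, here is a minimal sketch of the classic Frank-Wolfe template (Algorithm 1 in the paper's numbering) with the parameter-free step size gamma_k = 2/(k+2). This is the standard FW method, not the paper's Extra FW, which adds an extra linear subproblem per iteration; the function names and arguments below are our own illustrative assumptions.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, num_iters=200):
    """Sketch of classic Frank-Wolfe (conditional gradient).

    grad_f: callable returning the gradient of the objective at x.
    lmo:    linear minimization oracle; returns argmin_{v in C} <g, v>.
    x0:     feasible starting point (must lie in the constraint set C).
    """
    x = x0.copy()
    for k in range(num_iters):
        g = grad_f(x)
        v = lmo(g)                          # linear subproblem over C
        gamma = 2.0 / (k + 2.0)             # parameter-free step size
        x = (1.0 - gamma) * x + gamma * v   # convex combination stays feasible
    return x
```

A concrete oracle and a toy run are sketched next.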
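The Experiment Setup row notes that R is tuned so the solution is roughly as sparse as the dataset. A plausible reading (an assumption on our part, since the paper excerpt does not define R here) is that R is the radius of an ℓ1-norm ball constraint: each FW linear subproblem over such a ball returns a 1-sparse vertex, so smaller R and fewer iterations yield sparser iterates. The sketch below shows the ℓ1-ball oracle and a toy least-squares run; the synthetic data and all names are illustrative only.

```python
import numpy as np

def lmo_l1_ball(g, R):
    """Linear minimization oracle for {x : ||x||_1 <= R}.

    argmin_{||v||_1 <= R} <g, v> is attained at a signed, scaled
    coordinate vector: -R * sign(g_i) * e_i with i = argmax_i |g_i|.
    """
    v = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    v[i] = -R * np.sign(g[i])
    return v

# Toy least-squares example (synthetic data, purely illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
R = 1.0                                   # constraint radius; tuning it trades fit for sparsity

x = np.zeros(20)                          # feasible start
for k in range(100):
    g = A.T @ (A @ x - b)                 # gradient of 0.5 * ||A x - b||^2
    v = lmo_l1_ball(g, R)
    gamma = 2.0 / (k + 2.0)
    x = (1.0 - gamma) * x + gamma * v     # each step mixes in a 1-sparse atom

print("nonzeros:", np.count_nonzero(x), "l1 norm:", np.abs(x).sum())
```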