Self-Concordant Analysis of Frank-Wolfe Algorithms

Authors: Pavel Dvurechensky, Petr Ostroukhov, Kamil Safin, Shimrit Shtern, Mathias Staudigl

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the numerical experiments we tested the performance of Variant 1 (V1) and Variant 2 (V2) of Algorithm 2, and compared them with the performance of Frank-Wolfe with the standard step-size 2/(k+2) (stand.), and with the step-size determined by exact line search (Line-S.). As a further benchmark, the self-concordant proximal Newton (PN) method of Tran-Dinh et al. (2015), as implemented in the SCOPT package, is included.
Researcher Affiliation | Academia | 1) Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany; 2) National Research University Higher School of Economics, Moscow, Russia; 3) Institute for Information Transmission Problems RAS, Moscow, Russia; 4) Moscow Institute of Physics and Technology, Dolgoprudny, Russia; 5) Grand Fellow, Technion Israel Institute of Technology, Haifa, Israel; 6) Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands.
Pseudocode | Yes | Algorithm 1: Standard Frank-Wolfe method; Algorithm 2: Adaptive Frank-Wolfe method for SC functions; Algorithm 3: Function step(f, v, x, g, L); Algorithm 4: LLOO-based convex optimization.
Open Source Code | Yes | The code is accessible via https://github.com/kamil-safin/SCFW.
Open Datasets | Yes | For the Poisson inverse problem, the datasets a1a–a9a from the LIBSVM library (Chang & Lin, 2011) were used.
Dataset Splits | No | The paper mentions the specific datasets used but gives no explicit details about training/validation/test splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | Yes | The experiments were conducted on a PC with an Intel Core i5-7500 at 3.4 GHz and a total of 16 GB of RAM.
Software Dependencies | Yes | All code is written in Python 3, with the scientific-computing packages NumPy 1.18.1 and SciPy 1.4.1.
Experiment Setup | No | The paper states termination criteria (e.g., "terminated after 50,000 iterations", "optimality gap lower than 1e-10") and names the algorithms' hyperparameters (β, ε, γu, γd), but does not give the specific values used in the experiments; it only points to "practically efficient values" from another paper for some parameters, without explicitly stating that those values were used.
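To make the baseline referenced above concrete, the following is a minimal sketch of the standard Frank-Wolfe method with the agnostic 2/(k+2) step-size (the "stand." variant mentioned in the experiments). The quadratic objective and the probability-simplex feasible set are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Standard Frank-Wolfe on the probability simplex with step-size 2/(k+2).

    grad: callable returning the gradient of the (smooth convex) objective.
    x0:   starting point in the simplex.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        # Linear minimization oracle over the simplex: the vertex e_i
        # corresponding to the smallest gradient coordinate.
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0
        gamma = 2.0 / (k + 2)          # standard (agnostic) step-size
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# Illustrative problem: min ||x - b||^2 over the simplex, with b inside it.
b = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2 * (x - b),
                             np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of simplex vertices, no projection step is needed; this is the structural property that the paper's adaptive variants (Algorithm 2) inherit while replacing the fixed 2/(k+2) rule with step-sizes driven by the self-concordance parameters.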