Lazifying Conditional Gradient Algorithms

Authors: Gábor Braun, Sebastian Pokutta, Daniel Zink

ICML 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Complementing the theoretical analysis we report computational results demonstrating effectiveness of our approach via a significant reduction in wall-clock running time compared to their linear optimization counterparts." and "Computational experiments. We demonstrate computational superiority by extensive comparisons of the weak separation based versions with their original versions. In all cases we report significant speedups in wall-clock time often of several orders of magnitude." |
| Researcher Affiliation | Academia | "ISyE, Georgia Institute of Technology, Atlanta, GA." |
| Pseudocode | Yes | Algorithm 1: Frank-Wolfe Algorithm (Frank & Wolfe, 1956); Algorithm 2: Lazy Conditional Gradients (LCG); Algorithm 3: Lazy Pairwise Conditional Gradients (LPCG); Algorithm 4: Parameter-free Lazy Conditional Gradients (LCG). A minimal Frank-Wolfe sketch follows the table. |
| Open Source Code | No | No explicit statement providing access to source code for the methodology was found. The paper presents algorithms and experiments but gives no link or statement about a code release. |
| Open Datasets | No | The paper mentions a video co-localization problem and a matrix completion instance but provides no concrete access information (link, DOI, specific repository, or formal citation with authors and year) for the datasets used. MIPLIB 2003 and MIPLIB 2010 appear in the bibliography, but the main text neither identifies them as the public datasets used nor gives direct access details for them. |
| Dataset Splits | No | No dataset split information (percentages, sample counts, or references to predefined splits) is provided. The paper describes experiments but does not detail how data was partitioned into training, validation, or test sets. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types, or memory amounts) are mentioned; the paper refers only to wall-clock time for performance comparisons. |
| Software Dependencies | Yes | Gurobi Optimization. Gurobi Optimizer Reference Manual, version 6.5, 2016. |
| Experiment Setup | No | The paper discusses algorithmic parameters such as the accuracy K > 1 and the initial upper bound Φ0, and studies the effect of K experimentally, but it does not report the concrete parameter values or system-level settings needed to fully reproduce the presented results. The sketch of the lazified loop after the table shows where K and Φ0 enter. |
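
For orientation on the pseudocode listed above, here is a minimal sketch of the vanilla Frank-Wolfe step (Algorithm 1 in the paper): each iteration calls a linear minimization oracle (LMO) over the feasible region and moves toward the returned vertex. The toy simplex LMO, the quadratic objective, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_steps=100):
    """Sketch of the vanilla Frank-Wolfe (conditional gradient) method.

    grad : callable returning the gradient of the objective at x.
    lmo  : linear minimization oracle; given direction c, returns
           argmin over the feasible region of <c, v>.
    x0   : feasible starting point.
    """
    x = x0
    for t in range(num_steps):
        g = grad(x)
        v = lmo(g)                          # exact linear optimization call
        gamma = 2.0 / (t + 2.0)             # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * v   # convex combination stays feasible
    return x

# Toy instance (assumed for illustration): minimize ||x - b||^2 over the
# probability simplex, whose LMO picks the vertex at the smallest gradient entry.
b = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2.0 * (x - b)
lmo = lambda c: np.eye(len(c))[np.argmin(c)]
x = frank_wolfe(grad, lmo, x0=np.ones(3) / 3)
```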
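
The paper's lazification idea (Algorithms 2-4) swaps the exact LMO call for a weak separation oracle: reuse a previously seen vertex whenever it already yields at least Φ/K progress, and halve the bound Φ when no vertex does. The cache-based oracle and the simple open-loop step size below are a hedged sketch under those assumptions; the paper's Algorithm 2 uses its own step-size rule, and `lazy_cg` is a hypothetical name, not the authors' implementation.

```python
import numpy as np

def lazy_cg(grad, lmo, x0, phi0, K=2.0, num_steps=200):
    """Sketch of lazy conditional gradients with a cached weak separation oracle.

    phi0 : initial upper bound Phi_0 on the (scaled) primal gap.
    K    : accuracy parameter, K > 1.
    """
    x, phi = x0, phi0
    cache = [x0]                            # vertices seen so far
    for t in range(num_steps):
        g = grad(x)
        # Weak separation via caching: any cached vertex achieving Phi/K
        # progress is accepted without touching the expensive exact oracle.
        v = next((u for u in cache if g @ (x - u) >= phi / K), None)
        if v is None:
            u = lmo(g)                      # fall back to the exact LMO
            cache.append(u)                 # remember the vertex for reuse
            if g @ (x - u) >= phi / K:
                v = u
        if v is None:                       # "negative" oracle answer
            phi /= 2.0                      # halve the gap estimate, keep x
            continue
        gamma = 2.0 / (t + 2.0)             # simple open-loop step size
        x = (1.0 - gamma) * x + gamma * v
    return x

# Reuse of the toy simplex instance from the previous sketch.
b = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2.0 * (x - b)
lmo = lambda c: np.eye(len(c))[np.argmin(c)]
x = lazy_cg(grad, lmo, x0=np.ones(3) / 3, phi0=1.0)
```

This is where the two parameters flagged in the Experiment Setup row live: K trades oracle accuracy for cache hits, and Φ0 seeds the gap estimate that the negative oracle answers halve.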