On the Global Linear Convergence of Frank-Wolfe Optimization Variants
Authors: Simon Lacoste-Julien, Martin Jaggi
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the performance of the presented algorithm variants in two numerical experiments, shown in Figure 2. |
| Researcher Affiliation | Academia | Simon Lacoste-Julien, INRIA SIERRA project-team, École Normale Supérieure, Paris, France; Martin Jaggi, Dept. of Computer Science, ETH Zürich, Switzerland |
| Pseudocode | Yes | Algorithm 1 Away-steps Frank-Wolfe algorithm: AFW(x(0), A, ϵ) |
| Open Source Code | No | Code is available from the authors' website. |
| Open Datasets | Yes | For the LMOA, we re-use the code provided by [16] and their included aeroplane dataset resulting in a QP over 660 variables. |
| Dataset Splits | No | Not found. The paper does not specify exact train/validation/test split percentages, sample counts, or cross-validation setup for the datasets used. |
| Hardware Specification | No | Not found. The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud resources) used for the experiments. |
| Software Dependencies | No | Not found. The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4). |
| Experiment Setup | No | Not found. The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size) or explicit training configurations. |
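The pseudocode row above refers to the paper's Algorithm 1, the Away-steps Frank-Wolfe algorithm AFW(x(0), A, ϵ). As a point of reference, here is a minimal, hedged sketch of the away-steps variant, specialized (as an illustrative assumption, not the paper's setup) to the probability simplex with a quadratic objective, so that the active atoms are simply the nonzero coordinates and exact line search is closed-form. Function and variable names are ours, not the paper's.

```python
import numpy as np

def afw_simplex(f_grad, quad, x0, eps=1e-6, max_iter=1000):
    """Away-steps Frank-Wolfe over the probability simplex.

    f_grad(x): gradient of the objective at x.
    quad: Hessian of the (assumed quadratic) objective, used for
          exact line search. Stops when the FW duality gap <= eps.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = f_grad(x)
        s_idx = int(np.argmin(g))                    # FW atom: best vertex e_s
        active = np.where(x > 1e-12)[0]              # active set = support of x
        v_idx = active[int(np.argmax(g[active]))]    # away atom: worst active vertex
        fw_gap = g @ x - g[s_idx]                    # FW duality gap <x - s, g>
        if fw_gap <= eps:
            break
        away_gap = g[v_idx] - g @ x                  # away gap <x - v, -g>
        if fw_gap >= away_gap:
            # forward (FW) step toward e_s
            d = -x.copy()
            d[s_idx] += 1.0
            gamma_max = 1.0
        else:
            # away step, moving mass off e_v
            d = x.copy()
            d[v_idx] -= 1.0
            gamma_max = x[v_idx] / (1.0 - x[v_idx])
        # exact line search for a quadratic objective, clipped to [0, gamma_max]
        denom = d @ quad @ d
        gamma = gamma_max if denom <= 0 else min(gamma_max, -(g @ d) / denom)
        x = x + gamma * d
    return x
```

For example, minimizing f(x) = ½‖x − b‖² over the simplex with b already in the simplex recovers x ≈ b; an away step taken at γ = γ_max is a "drop step" that zeroes the away coordinate exactly, which is the mechanism behind the linear convergence rates the paper proves.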