Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization
Authors: Benjamin Dubois-Taine, Francis Bach, Quentin Berthet, Adrien Taylor
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide theoretical convergence rates and also present numerical results to demonstrate the performance of our proposed algorithms. The numerical experiments include different variants of our proposed algorithms and compare them with existing ones on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shenzhen, P.R. China |
| Pseudocode | Yes | Algorithm 1: FSMC; Algorithm 2: Accelerated Frank-Wolfe Algorithm (AFW) for SCM |
| Open Source Code | No | There is no explicit statement or link providing access to the source code for the methodology described in this paper. |
| Open Datasets | Yes | We use the a9a dataset and the w8a dataset which are commonly used benchmark datasets. |
| Dataset Splits | Yes | We randomly choose 80% of the training data from both a9a and w8a datasets as the training set and the remaining 20% as the test set. |
| Hardware Specification | Yes | All numerical experiments are performed on a desktop PC with an Intel Core i7-3210M CPU, 2.50GHz and 8GB RAM. |
| Software Dependencies | No | The algorithms are implemented in Python 3.8. This only provides a programming language version, not specific library or solver versions as required. |
| Experiment Setup | Yes | We choose the initial step size α₀ = 10⁻¹ and the parameter γ = 100 for Algorithm 1 and Algorithm 2, respectively. The batch size for both SGD and FSMC-VR is set to 500 in this experiment. The initial point is set to x₀ = 0. |
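The setup details reported above (80/20 random split, batch size 500, initial step size 10⁻¹, zero initial point) can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' code: synthetic data stands in for the a9a/w8a datasets, and a plain mini-batch SGD step on the logistic loss stands in for the paper's algorithms.

```python
import numpy as np

# Synthetic stand-in for a9a/w8a (hypothetical; the paper uses LIBSVM datasets)
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = rng.integers(0, 2, size=1000) * 2 - 1  # labels in {-1, +1}

# 80% train / 20% test random split, as reported
perm = rng.permutation(len(X))
cut = int(0.8 * len(X))
X_train, X_test = X[perm[:cut]], X[perm[cut:]]
y_train, y_test = y[perm[:cut]], y[perm[cut:]]

alpha0 = 1e-1             # initial step size from the setup description
batch_size = 500          # batch size reported for SGD and FSMC-VR
x = np.zeros(X.shape[1])  # initial point x0 = 0

# One epoch of mini-batch SGD on the logistic loss (illustrative only)
for start in range(0, len(X_train), batch_size):
    Xb = X_train[start:start + batch_size]
    yb = y_train[start:start + batch_size]
    margins = yb * (Xb @ x)
    # gradient of mean log(1 + exp(-y <x, a>)) over the batch
    grad = -(Xb * (yb / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    x -= alpha0 * grad

print(X_train.shape, X_test.shape)  # (800, 20) (200, 20)
```

The split sizes and hyperparameter values mirror the table entries; everything else (data dimensions, loss, update rule) is an assumption for illustration.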