Asynchronous Stochastic Frank-Wolfe Algorithms for Non-Convex Optimization

Authors: Bin Gu, Wenhan Xian, Heng Huang

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on real high-dimensional gray-scale images not only confirm the fast convergence of our algorithms, but also show a near-linear speedup on a parallel system with shared memory due to the lock-free implementation.
Researcher Affiliation | Collaboration | Bin Gu¹, Wenhan Xian² and Heng Huang²,¹ (¹JD Finance America Corporation; ²Department of Electrical & Computer Engineering, University of Pittsburgh, USA)
Pseudocode | Yes | Algorithm 1 Asynchronous Stochastic Frank-Wolfe Algorithm (AsySFW) and Algorithm 2 AsySVFW Algorithm. (A hedged Frank-Wolfe sketch follows this table.)
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the work described, nor does it provide a direct link to a code repository.
Open Datasets | Yes | The real gray-scale images are available at https://homepages.cae.wisc.edu/~ece533/images/
Dataset Splits | No | The paper describes removing 30% of the pixels for the robust matrix completion problem, which is corruption inherent to the task rather than an evaluation split, and does not provide explicit training, validation, and test splits for model evaluation.
Hardware Specification | Yes | Our experiments are performed on a 32-core two-socket Intel Xeon E5-2699 machine where each socket has 16 cores.
Software Dependencies | No | The paper states the implementation uses 'C++' and 'OpenMP' but does not provide specific version numbers for these software components. (A rough OpenMP sketch follows this table.)
Experiment Setup | Yes | Thus, the parameters σ and c are fixed at 0.15 and 500, respectively. In addition, we set the learning rate γ = 0.0001, the mini-batch size b = 500, and the inner loop size of AsySVFW m = 50. We choose X = (1/2)αY as the initial solution for AsySFW and AsySVFW, where α is the smallest value in {1, 2, ..., 10} such that ‖X‖ ≤ c. (A configuration sketch follows this table.)
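The Pseudocode row above refers to Algorithm 1 (AsySFW) and Algorithm 2 (AsySVFW). The paper's exact update rules are not reproduced here; the following is a minimal sketch of a generic sequential stochastic Frank-Wolfe loop, where `stochastic_grad` and `lmo` are hypothetical caller-supplied callables (a mini-batch gradient estimator and a linear minimization oracle over the constraint set), not the authors' definitions.

```cpp
// Minimal sketch of a generic (sequential) stochastic Frank-Wolfe loop.
// This is NOT the paper's exact Algorithm 1; the gradient estimator and the
// linear minimization oracle (LMO) are placeholders supplied by the caller.
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

void stochastic_frank_wolfe(
    Vec& x,                                                  // iterate, updated in place
    int iterations,
    double gamma,                                            // fixed step size (the paper quotes gamma = 1e-4)
    const std::function<Vec(const Vec&)>& stochastic_grad,   // mini-batch gradient estimate at x
    const std::function<Vec(const Vec&)>& lmo)               // s = argmin_{s in C} <g, s>
{
    for (int t = 0; t < iterations; ++t) {
        Vec g = stochastic_grad(x);            // noisy gradient at the current iterate
        Vec s = lmo(g);                        // extreme point of the constraint set C
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] += gamma * (s[i] - x[i]);     // x <- x + gamma * (s - x)
    }
}
```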
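The Hardware Specification and Software Dependencies rows indicate a C++/OpenMP, lock-free shared-memory implementation. Assuming a Hogwild-style scheme in which threads update the shared iterate without locks, a rough skeleton might look like the following; the helper callables are again hypothetical placeholders, and this is not the authors' released code.

```cpp
// Rough skeleton of a lock-free shared-memory update loop using OpenMP.
// Threads read and write the shared iterate without locks, so data races
// are intentional in this style of scheme; this is an illustration under
// assumptions, not the paper's implementation.
#include <cstddef>
#include <functional>
#include <vector>
#include <omp.h>

using Vec = std::vector<double>;

void asynchronous_updates(
    Vec& x, int total_iters, double gamma,
    const std::function<Vec(const Vec&)>& stochastic_grad,
    const std::function<Vec(const Vec&)>& lmo)
{
    #pragma omp parallel for schedule(dynamic)
    for (int t = 0; t < total_iters; ++t) {
        Vec snapshot = x;                      // possibly inconsistent read of shared state
        Vec s = lmo(stochastic_grad(snapshot));
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] += gamma * (s[i] - x[i]);     // unsynchronized write, no lock taken
    }
}
```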
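The Experiment Setup row quotes fixed hyperparameters (σ = 0.15, c = 500, γ = 0.0001, b = 500, m = 50) and an initial solution of the form X = (1/2)αY with α chosen from {1, ..., 10}. The sketch below collects those values in a config struct and picks the smallest α passing a feasibility test; the test itself (which norm the extracted text bounds by c) and the exact scaling are assumptions for illustration only.

```cpp
// Hedged sketch of the quoted experiment configuration and initialization.
// The numeric values come from the Experiment Setup row; the feasibility
// predicate is a caller-supplied assumption, not the paper's definition.
#include <cstddef>
#include <functional>
#include <vector>

struct ExperimentConfig {
    double sigma = 0.15;   // sigma, loss parameter
    double c     = 500.0;  // c, constraint parameter
    double gamma = 1e-4;   // learning rate gamma
    int    b     = 500;    // mini-batch size
    int    m     = 50;     // inner loop size for AsySVFW
};

// Pick the smallest alpha in {1, ..., 10} whose scaled copy of Y passes the
// feasibility test, mirroring the quoted initialization X = (1/2) * alpha * Y.
std::vector<double> initial_solution(
    const std::vector<double>& Y,
    const std::function<bool(const std::vector<double>&)>& feasible)
{
    std::vector<double> X(Y.size(), 0.0);
    for (int alpha = 1; alpha <= 10; ++alpha) {
        for (std::size_t i = 0; i < Y.size(); ++i)
            X[i] = 0.5 * alpha * Y[i];
        if (feasible(X)) return X;
    }
    return X;  // fallback: last candidate if none passed the test
}
```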