Lock-Free Optimization for Non-Convex Problems

Authors: Shen-Yi Zhao, Gong-Duo Zhang, Wu-Jun Li

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Empirical results also show that both Hogwild! and AsySVRG are convergent on non-convex problems, which successfully verifies our theoretical results." ... "To verify our theoretical results about Hogwild! and AsySVRG, we use a fully-connected neural network to construct a non-convex function. ... We use two datasets: connect-4 and MNIST to do experiments..." |
| Researcher Affiliation | Academia | Shen-Yi Zhao, Gong-Duo Zhang, Wu-Jun Li; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, China; {zhaosy, zhanggd}@lamda.nju.edu.cn, liwujun@nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 (Hogwild!) and Algorithm 2 (AsySVRG) are explicitly presented. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We use two datasets: connect-4 and MNIST to do experiments and λ = 10^-3." ... Both datasets are available from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific dataset-split information (exact percentages, sample counts, citations to predefined train/validation/test splits, or a detailed splitting methodology). |
| Hardware Specification | Yes | "The experiments are conducted on a server with 12 Intel cores and 64G memory." ... "One possible reason is that we have two CPUs in our server, with 6 cores for each CPU." |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | "We initialize w by randomly sampling from a Gaussian distribution with mean being 0 and variance being 0.01, and initialize b = 0. During training, we use a fixed stepsize for both Hogwild! and AsySVRG. The stepsize is chosen from {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}, and the best is reported. For the iteration number of the inner-loop of AsySVRG, we set M = n/p, where p is the number of threads." (A minimal sketch of this setup appears below the table.) |
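
Since the paper does not release code, the following is a hypothetical Python sketch of the setup quoted in the Experiment Setup row: w drawn from a Gaussian with mean 0 and variance 0.01, b initialized to 0, a fixed stepsize, and M = n/p lock-free inner-loop updates per thread. It uses a toy least-squares objective in place of the paper's fully-connected network, and the problem sizes, thread count p, and chosen stepsize are illustrative assumptions rather than values from the paper.

```python
# Hypothetical sketch (not the authors' code): a Hogwild!-style lock-free SGD loop
# with the quoted setup, applied to a toy least-squares objective instead of the
# paper's fully-connected network. n, d, p, and the stepsize are illustrative.
import threading

import numpy as np

n, d, p = 10_000, 20, 4       # samples, features, threads (illustrative values)
stepsize = 0.01               # one value from the grid {0.1, 0.05, ..., 0.0001}
M = n // p                    # inner-loop iterations per thread, M = n/p

rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Initialization as described: w ~ N(0, 0.01) (variance 0.01 -> std 0.1), b = 0.
w = rng.normal(0.0, np.sqrt(0.01), size=d)
b = np.zeros(1)

def worker(seed):
    """Each thread reads and writes the shared parameters w, b with no locking."""
    local_rng = np.random.default_rng(seed)
    for _ in range(M):
        i = local_rng.integers(n)            # sample one data point uniformly
        err = X[i] @ w + b[0] - y[i]         # gradient factor of 0.5 * (pred - y)^2
        w[:] = w - stepsize * err * X[i]     # lock-free in-place update of w
        b[0] = b[0] - stepsize * err         # lock-free in-place update of b

threads = [threading.Thread(target=worker, args=(s,)) for s in range(p)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("training loss:", 0.5 * np.mean((X @ w + b[0] - y) ** 2))
```

The point of the sketch is only the access pattern: all p threads mutate the same shared w and b arrays without any synchronization, which is the lock-free update scheme the paper analyzes for Hogwild! and AsySVRG.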