Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks

Authors: Jie Hu, Vishwaraj Doshi, Do Young Eun

ICLR 2024

Each entry below gives a reproducibility variable, its assessed result, and the LLM's supporting response.
Research Type: Experimental. "In this section, we simulate our SA-SRRW algorithm on the wiki-Vote graph (Leskovec & Krevl, 2014), comprising 889 nodes and 2914 edges. ... Our results are presented in Figures 2 and 3, where each experiment is repeated 100 times."
Researcher Affiliation: Collaboration. "Jie Hu (1), Vishwaraj Doshi (2), Do Young Eun (1); (1) North Carolina State University, (2) IQVIA Inc. {jhu29,dyeun}@ncsu.edu, vishwaraj.doshi@iqvia.com"
Pseudocode: No. The paper describes algorithm steps using mathematical equations and textual descriptions, e.g., "Draw: X_{n+1} ~ K_{X_n}[x_n] (4a); Update: x_{n+1} = x_n + γ_{n+1}(δ_{X_{n+1}} - x_n) (4b); θ_{n+1} = θ_n + β_{n+1} H(θ_n, X_{n+1}) (4c)", but does not present them in a clearly labeled "Pseudocode" or "Algorithm" block format.
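The recursion (4a)-(4c) quoted above can be sketched in code. This is only our illustrative reading of the quoted equations, not the authors' implementation: the self-repellent kernel is written in its standard form K[x](i, j) ∝ P(i, j)·(x_j/µ_j)^(-α), and we assume the visit-frequency vector `x` is initialized strictly positive so the kernel weights are well defined.

```python
import numpy as np

def sa_srrw_step(X, x, theta, P, mu, alpha, gamma, beta, H):
    """One iterate of the SA-SRRW recursion (4a)-(4c), as we read the quote.

    X     : current node (int)
    x     : empirical visit-frequency vector x_n (strictly positive)
    theta : SA iterate theta_n
    P     : base Markov chain transition matrix (e.g. MHRW)
    mu    : target distribution over nodes
    alpha : self-repellence parameter of the SRRW kernel (assumed notation)
    H     : mean-field function H(theta, node)
    """
    # (4a) Draw the next node from the self-repellent kernel K_{X_n}[x_n]:
    # K[x](i, j) proportional to P(i, j) * (x_j / mu_j)^(-alpha)
    weights = P[X] * (x / mu) ** (-alpha)
    probs = weights / weights.sum()
    X_next = np.random.choice(len(mu), p=probs)

    # (4b) Empirical-distribution update: x_{n+1} = x_n + gamma*(delta_{X_{n+1}} - x_n)
    delta = np.zeros_like(x)
    delta[X_next] = 1.0
    x_next = x + gamma * (delta - x)

    # (4c) SA update: theta_{n+1} = theta_n + beta * H(theta_n, X_{n+1})
    theta_next = theta + beta * H(theta, X_next)
    return X_next, x_next, theta_next
```

Note that (4b) keeps `x` on the probability simplex: the update is a convex combination of `x` and a point mass at the freshly drawn node.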
Open Source Code: No. The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets: Yes. "In this section, we simulate our SA-SRRW algorithm on the wiki-Vote graph (Leskovec & Krevl, 2014), comprising 889 nodes and 2914 edges. ... we consider the following L2-regularized binary classification problem: ... from the ijcnn1 dataset (with 22 features, i.e., s_i ∈ R^22) from LIBSVM (Chang & Lin, 2011), ... Figure 2c on the smaller Dolphins graph (Rossi & Ahmed, 2015), with 62 nodes and 159 edges."
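The objective itself is elided in the quote above. Purely as an illustration of the kind of problem described, a standard L2-regularized logistic-regression loss with penalty parameter κ (the paper's exact objective may differ) could be written as:

```python
import numpy as np

def l2_logistic_loss(theta, s, y, kappa=1.0):
    """Standard L2-regularized logistic loss (illustrative only; the paper's
    exact objective is elided in the quoted excerpt).

    theta : model parameters, shape (d,)
    s     : feature vectors, shape (m, d), e.g. d = 22 for ijcnn1
    y     : labels in {-1, +1}, shape (m,)
    kappa : L2 penalty parameter (the setup reports kappa = 1)
    """
    margins = y * (s @ theta)
    # log(1 + exp(-m)) computed stably as logaddexp(0, -m)
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * kappa * np.dot(theta, theta)
```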
Dataset Splits: No. The paper mentions using specific datasets but does not explicitly provide details about training, validation, and test splits (e.g., percentages or counts).
Hardware Specification: No. The paper does not specify the hardware used for running experiments, such as particular CPU or GPU models, or memory specifications.
Software Dependencies: No. The paper does not list specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup: Yes. "We configure the SRRW's base Markov chain P as the MHRW with a uniform target distribution µ = (1/N)·1. For distributed optimization, we consider the following L2-regularized binary classification problem: ... and penalty parameter κ = 1. ... We fix the step size β_n = (n+1)^(-0.9) for the SA iterates and adjust γ_n = (n+1)^(-a) in the SRRW iterates to cover all three cases discussed in this paper: (i) a = 0.8; (ii) a = 0.9; (iii) a = 1. We use mean square error (MSE), i.e., E[||θ_n - θ*||^2], to measure the error on the SA iterates. Our results are presented in Figures 2 and 3, where each experiment is repeated 100 times."
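The quoted step-size schedules and error metric are easy to restate in code; the helper names below are ours, not the authors':

```python
import numpy as np

def step_sizes(n, a):
    """Quoted schedules: beta_n = (n+1)^(-0.9) for the SA iterates,
    gamma_n = (n+1)^(-a) for the SRRW iterates, with a in {0.8, 0.9, 1.0}."""
    return (n + 1) ** (-0.9), (n + 1) ** (-a)

def mse(theta_runs, theta_star):
    """Monte-Carlo estimate of E[||theta_n - theta*||^2], averaging the
    squared error over repeated runs (the quote reports 100 repetitions)."""
    diffs = np.asarray(theta_runs) - theta_star  # shape: (num_runs, dim)
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```

Note the role of the exponent a: for a > 0.9 the SRRW iterates use a smaller (faster-decaying) step size than the SA iterates, which is what distinguishes the three quoted cases.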