Minibatch Stochastic Approximate Proximal Point Methods

Authors: Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We corroborate our theoretical results with extensive empirical testing, which demonstrates the gains provided by accurate modeling and minibatching.
Researcher Affiliation | Academia | Hilal Asi (Stanford University, asi@stanford.edu); Karan Chadha (Stanford University, knchadha@stanford.edu); Gary Cheng (Stanford University, chenggar@stanford.edu); John C. Duchi (Stanford University, jduchi@stanford.edu)
Pseudocode | No | The paper describes algorithms through mathematical equations and textual descriptions but does not include any clearly labeled pseudocode blocks or algorithm boxes.
Open Source Code | Yes | Please visit github.com/garyxcheng/parallel-aprox for the code for our methods and experiments.
Open Datasets | No | The paper describes generating synthetic datasets for linear, absolute loss, and logistic regression using random matrices and noise distributions (e.g., 'generate a random matrix A and x ~ N(0, I_d)', 'v ~ N(0, I_n)', 'v_i ~ Lap(0, σ^2)'). It does not refer to or provide access to any specific publicly available or open datasets.
Dataset Splits | No | The paper describes how it generates data for experiments but does not explicitly mention any train/validation/test splits, specific percentages, or sample counts for these splits. The evaluation is based on iterations to reach epsilon accuracy on the problem instance as a whole, rather than on distinct data splits.
Hardware Specification | No | The paper mentions 'physical limits on processor speeds' and discusses parallelization but does not provide any specific details about the hardware used to run the experiments (e.g., specific CPU or GPU models, cloud instances).
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks like Python, PyTorch, TensorFlow, or specific solvers).
Experiment Setup | Yes | We use n = 1000, d = 40, minibatch sizes m ∈ {1, 4, 8, 16, 32, 64}, and initial stepsizes α0 ∈ {10^-2, 10^-1.5, ..., 10^2.5, 10^3} (α0 ∈ {10^-2, 10^-1.5, ..., 10^4.5, 10^5} for logistic regression). For all experiments we run 30 trials with different seeds and plot the 95% confidence sets. [...] For each of the problem types, we use stepsizes α_k = α0 k^(-1/2).
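As a rough illustration of the experimental setup described above (n = 1000, d = 40, Gaussian data, decaying stepsizes α_k = α0 k^(-1/2)), the sketch below generates a synthetic linear-regression instance and runs plain minibatch SGD over it. This is not the paper's aProx model-based update; the optimizer, the function name `sgd_linear`, and the specific α0, m, and iteration count are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 40                       # problem sizes from the experiment setup
A = rng.normal(size=(n, d))           # random design matrix
x_star = rng.normal(size=d)           # ground truth, x* ~ N(0, I_d)
b = A @ x_star + rng.normal(size=n)   # linear regression targets, noise v ~ N(0, I_n)

def sgd_linear(A, b, alpha0, m, iters=2000):
    """Minibatch SGD on (1/2n)||Ax - b||^2 with stepsize alpha_k = alpha0 * k^(-1/2).

    Stand-in optimizer for illustration only; the paper studies approximate
    proximal point (aProx) updates, which are not implemented here.
    """
    x = np.zeros(A.shape[1])
    n_samples = A.shape[0]
    for k in range(1, iters + 1):
        idx = rng.integers(0, n_samples, size=m)        # minibatch of size m
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / m     # minibatch gradient
        x -= alpha0 * k ** -0.5 * grad                  # decaying stepsize
    return x

x_hat = sgd_linear(A, b, alpha0=0.1, m=16)
rel_err = np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star)
```

Sweeping `m` over {1, 4, 8, 16, 32, 64} and `alpha0` over the logarithmic grid quoted in the table, with 30 random seeds per configuration, would reproduce the shape of the paper's experiment loop.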