Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data

Authors: Shuai Zheng, James Tin-Yau Kwok

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we perform experiments on logistic regression (Section 4.1) and AUC maximization (Section 4.2)."
Researcher Affiliation | Academia | "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong."
Pseudocode | Yes | "Algorithm 1: Stochastic sample-average gradient (SSAG). Algorithm 2: Stochastic SAGA (S-SAGA)." (a hedged sketch of the SAGA-style update appears after the table)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is available.
Open Datasets | Yes | "Experiments are performed on two high-dimensional data sets from the LIBSVM archive (Table 2)."
Dataset Splits | No | Table 2 provides '#training' and '#testing' sample counts for the datasets used, but the paper does not specify the splitting methodology (e.g., exact percentages, random seed, or specific predefined splits) for training, validation, or test sets.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | "The dropout probability p = 0.3... We vary λ ∈ {10⁻⁶, 10⁻⁷, 10⁻⁸}... We use a slightly larger β_t = t^(−0.75)... The stepsize schedule is η_t = c/(γ + t). We fix c = 2/λ for SGD, SSAG, S-SAGA, and c = 2n for S-MISO..." (helpers encoding this schedule appear after the sketch below)
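
The Pseudocode row names the paper's two algorithms. As a rough illustration of the mechanics, the sketch below combines a standard SAGA update with a fresh dropout perturbation of the sampled point on every iteration, which is the general idea behind S-SAGA. The logistic objective, the function names, and the default values are our own assumptions, not the paper's exact Algorithm 2.

```python
import numpy as np

def logistic_grad(w, x, y):
    """Gradient of the logistic loss log(1 + exp(-y * <w, x>)) w.r.t. w."""
    return -y * x / (1.0 + np.exp(y * x.dot(w)))

def s_saga_sketch(X, y, steps, lam=1e-6, p=0.3, c=None, gamma=1.0, seed=0):
    """SAGA-style loop with a per-iteration dropout perturbation of the
    sampled point (illustrative sketch, not the paper's exact Algorithm 2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    table = np.zeros((n, d))   # last stored gradient for each sample
    g_avg = np.zeros(d)        # running average of the stored gradients
    c = 2.0 / lam if c is None else c  # c = 2/lambda, as in the quoted setup
    for t in range(steps):
        i = rng.integers(n)
        # "infinite data": draw a fresh dropout-perturbed copy of sample i
        mask = rng.random(d) > p
        x_tilde = X[i] * mask / (1.0 - p)
        g_new = logistic_grad(w, x_tilde, y[i]) + lam * w
        # SAGA variance-reduced direction built from the gradient table
        v = g_new - table[i] + g_avg
        w -= (c / (gamma + t)) * v  # stepsize eta_t = c / (gamma + t)
        # refresh the table entry and the running average
        g_avg += (g_new - table[i]) / n
        table[i] = g_new
    return w
```

Note the O(n·d) gradient table here is the generic SAGA cost; the "lightweight" storage reduction the paper's title refers to is not attempted in this sketch.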
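Similarly, the Experiment Setup row can be made concrete with a few small helpers. The numeric values are taken directly from the quote; γ is left as a free argument because the quote does not fix it, and all names here are our own.

```python
# Hyperparameters quoted in the Experiment Setup row.
P_DROPOUT = 0.3               # dropout probability p
LAMBDAS = [1e-6, 1e-7, 1e-8]  # regularization sweep for lambda

def beta(t):
    """Weight beta_t = t^(-0.75) from the quoted setup (for t >= 1)."""
    return t ** -0.75

def stepsize_constant(method, lam, n):
    """c = 2/lambda for SGD, SSAG, and S-SAGA; c = 2n for S-MISO."""
    return 2.0 * n if method == "S-MISO" else 2.0 / lam

def stepsize(t, c, gamma):
    """Schedule eta_t = c / (gamma + t); gamma is not given in the quote."""
    return c / (gamma + t)
```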