Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction

Authors: Zhan Shi, Xinhua Zhang, Yaoliang Yu

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the theoretical findings through extensive experiments on two example applications: adversarial prediction and LPboosting.
Researcher Affiliation | Academia | Zhan Shi and Xinhua Zhang, University of Illinois at Chicago, Chicago, Illinois 60661, {zshi22,zhangx}@uic.edu; Yaoliang Yu, University of Waterloo, Waterloo, ON N2L 3G1, yaoliang.yu@uwaterloo.ca
Pseudocode | Yes | Algorithm 1: Breg-SVRG for Saddle-Point (a hedged sketch of the update scheme appears after this table)
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code will be made available.
Open Datasets | Yes | We experimented on the adult dataset from the UCI repository, which we partitioned into n = 32,561 training examples and 16,281 test examples, with m = 123 features. (A loading sketch follows the table.)
Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention a validation set or its split details.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | We set λ = γ = 0.01 and ν = 0.1 because it gave the best prediction accuracy. We tried a range of values for the step size η; the best we found was 10^-3 for Entropy-SVRG and 10^-6 for Euclidean-SVRG (larger step sizes made Euclidean-SVRG fluctuate even more). For both methods, m = 32561/50 inner iterations gave good results. We fixed µ = 1, λ = 0.01 for the ionosphere dataset, and µ = 1, λ = 0.1 for the synthetic dataset. (These values are collected in the config sketch below.)
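
The Breg-SVRG update pairs SVRG-style variance reduction with a Bregman prox-mapping; under the entropy divergence on the probability simplex the prox step becomes a multiplicative update. Below is a minimal, hypothetical Python sketch on a toy bilinear matrix game min_{x∈Δ} max_{y∈Δ} xᵀAy, not the paper's adversarial-prediction objective; the paper's analysis targets strongly convex-concave problems, so this toy returns averaged iterates rather than the last iterate.

```python
import numpy as np

def breg_svrg_bilinear(A, eta=0.05, epochs=30, seed=0):
    """Sketch of Breg-SVRG with entropy prox steps on the toy game
    min_{x in simplex} max_{y in simplex} x^T A y, decomposed as
    x^T A y = (1/n) sum_j psi_j(x, y) with psi_j(x, y) = n (x^T A[:, j]) y_j."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    x = np.full(d, 1.0 / d)              # primal iterate on the simplex
    y = np.full(n, 1.0 / n)              # dual iterate on the simplex
    x_sum, y_sum, t = np.zeros(d), np.zeros(n), 0
    for _ in range(epochs):
        x_s, y_s = x.copy(), y.copy()    # snapshot iterates
        gx_full = A @ y_s                # full gradient in x at the snapshot
        gy_full = A.T @ x_s              # full gradient in y at the snapshot
        for _ in range(n):               # inner loop, one pass per epoch
            j = rng.integers(n)          # sample one component psi_j
            # variance-reduced gradient estimates
            gx = n * A[:, j] * (y[j] - y_s[j]) + gx_full
            gy = gy_full.copy()
            gy[j] += n * (A[:, j] @ (x - x_s))
            # entropy Bregman prox = multiplicative update + renormalization
            x = x * np.exp(-eta * gx); x /= x.sum()
            y = y * np.exp(eta * gy);  y /= y.sum()
            x_sum += x; y_sum += y; t += 1
    return x_sum / t, y_sum / t          # averaged iterates

# Duality gap of a matrix game: max_j (A^T x)_j - min_i (A y)_i
A = np.random.default_rng(1).standard_normal((50, 80)) / 10
x, y = breg_svrg_bilinear(A)
print("duality gap:", (A.T @ x).max() - (A @ y).min())
```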
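
The quoted split (32,561 train / 16,281 test, 123 features) matches the a9a packaging of the UCI adult dataset distributed on the LIBSVM dataset page; assuming local copies of the a9a and a9a.t files, a loading sketch:

```python
from sklearn.datasets import load_svmlight_file

# Assumes a9a / a9a.t downloaded from the LIBSVM dataset page; this
# packaging of UCI adult has exactly the quoted split:
# 32,561 train / 16,281 test, 123 binary features.
X_train, y_train = load_svmlight_file("a9a", n_features=123)
X_test, y_test = load_svmlight_file("a9a.t", n_features=123)
print(X_train.shape, X_test.shape)   # (32561, 123) (16281, 123)
```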
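
For convenience, the reported hyperparameters can be gathered in one place. The key names below are illustrative, since the authors released no code; the values are the ones quoted from the paper.

```python
# Reported hyperparameters, gathered for reference. Key names are
# illustrative (no author code was released); values are from the paper.
ADVERSARIAL_PREDICTION = {
    "lambda": 0.01,               # regularization weight
    "gamma": 0.01,
    "nu": 0.1,                    # chosen for best prediction accuracy
    "eta_entropy_svrg": 1e-3,     # step size for Entropy-SVRG
    "eta_euclidean_svrg": 1e-6,   # step size for Euclidean-SVRG
    "inner_loop_m": 32561 // 50,  # inner iterations per epoch (= 651)
}
LPBOOSTING = {
    "ionosphere": {"mu": 1.0, "lambda": 0.01},
    "synthetic":  {"mu": 1.0, "lambda": 0.1},
}
```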