Stability and Generalization of Stochastic Gradient Methods for Minimax Problems

Authors: Yunwen Lei, Zhenhuan Yang, Tianbao Yang, Yiming Ying

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We report preliminary experimental results to verify our theory." "In this subsection, we report preliminary experimental results to validate our theoretical results. We consider two datasets available at the LIBSVM website: svmguide3 and w5a (Chang & Lin, 2011)."
Researcher Affiliation | Academia | 1 School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK; 2 Department of Mathematics and Statistics, State University of New York at Albany, USA; 3 Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA.
Pseudocode | No | The paper describes algorithmic updates such as w_{t+1} = Proj_W(w_t − η_t ∇_w f(w_t, v_t; z_{i_t})), v_{t+1} = Proj_V(v_t + η_t ∇_v f(w_t, v_t; z_{i_t})) (Eq. 4.1), but does not present them in a formally labeled 'Algorithm' or 'Pseudocode' block.
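The projected stochastic gradient descent-ascent update (4.1) quoted above can be sketched in a few lines. The sketch below is illustrative only: the toy objective f(w, v; z) = ⟨w − z, v⟩ − 0.5‖v‖², the L2-ball projection, and all parameter choices are assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def proj_ball(u, radius=1.0):
    """Euclidean projection onto an L2 ball (stand-in for Proj_W / Proj_V)."""
    norm = np.linalg.norm(u)
    return u if norm <= radius else u * (radius / norm)

def sgda(data, eta, T, rng):
    """Projected SGDA mirroring update (4.1): a descent step in w and an
    ascent step in v, using one uniformly sampled data point per iteration."""
    d = data.shape[1]
    w, v = np.zeros(d), np.zeros(d)
    eta_t = eta / np.sqrt(T)  # step size eta_t = eta / sqrt(T), as in the experiments
    for _ in range(T):
        z = data[rng.integers(len(data))]
        # Toy objective (illustrative only): f(w, v; z) = <w - z, v> - 0.5 ||v||^2
        grad_w = v            # gradient of f with respect to w
        grad_v = (w - z) - v  # gradient of f with respect to v
        w = proj_ball(w - eta_t * grad_w)  # descent in w
        v = proj_ball(v + eta_t * grad_v)  # ascent in v
    return w, v
```

For this toy saddle problem, the equilibrium is w = E[z] and v = 0, so the w iterate drifts toward the sample mean, which makes the update easy to sanity-check.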
Open Source Code | Yes | The source codes are available at https://github.com/zhenhuan-yang/minimax-stability.
Open Datasets | Yes | We consider two datasets available at the LIBSVM website: svmguide3 and w5a (Chang & Lin, 2011).
Dataset Splits | No | The paper describes the datasets used for experiments but does not provide details on training, validation, or test splits, nor does it mention cross-validation.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processors, or memory used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We consider step sizes η_t = η/√T with η ∈ {0.1, 0.3, 1, 3}. We repeat the experiments 25 times and report the average of the experimental results as well as the standard deviation.
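The reported protocol (sweep the step-size constant, repeat 25 times, report mean and standard deviation) can be sketched as follows. Here `run_one_trial` is a hypothetical stand-in for a single training run; its return value and the 0.2 baseline are placeholders, not results from the paper.

```python
import numpy as np

def run_one_trial(eta, seed):
    # Hypothetical stand-in for one training run with step size
    # eta_t = eta / sqrt(T); returns a single scalar metric.
    rng = np.random.default_rng(seed)
    return 0.2 + 0.01 * rng.normal()

def mean_and_std(eta, n_repeats=25):
    """Repeat the experiment n_repeats times and report the average and
    standard deviation, matching the paper's 25-repetition protocol."""
    results = np.array([run_one_trial(eta, seed) for seed in range(n_repeats)])
    return float(results.mean()), float(results.std())

# Sweep the step-size constants used in the paper's experiments.
summary = {eta: mean_and_std(eta) for eta in (0.1, 0.3, 1, 3)}
```

Fixing a distinct seed per repetition keeps the runs independent while making the whole sweep reproducible.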