Stochastic Hamiltonian Gradient Methods for Smooth Games

Authors: Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We supplement our analysis with experiments on stochastic bilinear and sufficiently bilinear games, where our theory is shown to be tight, and on simple adversarial machine learning formulations." (Abstract)
Researcher Affiliation | Collaboration | "¹Mila, Université de Montréal; Canada CIFAR AI Chair; ²Facebook AI Research. Correspondence to: Nicolas Loizou <loizouni@mila.quebec>."
Pseudocode | Yes | Algorithm 1: Stochastic Hamiltonian Gradient Descent (SHGD); Algorithm 2: Loopless Stochastic Variance Reduced Hamiltonian Gradient (L-SVRHG); Algorithm 3: L-SVRHG (with Restart). A hedged code sketch of SHGD is given after the table.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "In Gaussian GAN, we have a dataset of real data $x_{\text{real}}$ and latent variable $z$ from a normal distribution with mean 0 and standard deviation 1." (Section 7.2). For bilinear games: "we choose $n = d_1 = d_2 = 100$, $[A_i]_{kl} = 1$ if $i = k = l$ and $0$ otherwise, and $[b_i]_k, [c_i]_k \sim N(0, 1/n)$." (Section 7.1). This construction is sketched in code after the table.
Dataset Splits | No | The paper does not specify training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "We provide further details about the experiments and choice of hyperparameters for the different methods in Appendix F." (Section 7)
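
To make the quoted bilinear setup concrete, here is a minimal sketch of the problem construction in Python. It assumes $N(0, 1/n)$ denotes variance $1/n$ and that the objective is the finite-sum average $f(x, y) = \frac{1}{n}\sum_i (x^\top A_i y + b_i^\top x + c_i^\top y)$; the function name `make_bilinear_game` is illustrative, not from the paper.

```python
import numpy as np

def make_bilinear_game(n=100, d=100, seed=0):
    """Builds the quoted instance: [A_i]_{kl} = 1 if i == k == l, else 0,
    and [b_i]_k, [c_i]_k ~ N(0, 1/n). Returns stacked arrays A, b, c."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, d, d))
    for i in range(n):
        A[i, i, i] = 1.0  # the single nonzero entry of A_i sits at (k, l) = (i, i)
    # Assumption: N(0, 1/n) is read as variance 1/n, i.e. standard deviation 1/sqrt(n).
    b = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, d))
    c = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, d))
    return A, b, c
```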
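
Algorithm 1 (SHGD) runs stochastic gradient descent on the Hamiltonian $H(w) = \frac{1}{2}\|\xi(w)\|^2$ of the game's vector field $\xi(w) = (\nabla_x f, -\nabla_y f)$. The sketch below instantiates this for the bilinear game built above, using the unbiased two-sample gradient estimator $\frac{1}{2}(J_i^\top \xi_j + J_j^\top \xi_i)$ with independently drawn components $i, j$; the step size and iteration count are illustrative assumptions, not the authors' tuned values (those are in Appendix F of the paper).

```python
import numpy as np

def shgd(A, b, c, step=0.1, iters=5000, seed=1):
    """Minimal SHGD sketch for f(x, y) = (1/n) sum_i (x^T A_i y + b_i^T x + c_i^T y).
    Descends H(w) = 0.5 * ||xi(w)||^2 using two independently sampled components."""
    n, d, _ = A.shape
    rng = np.random.default_rng(seed)
    x, y = np.zeros(d), np.zeros(d)

    def xi(i):
        # Stochastic vector field of component i: (grad_x f_i, -grad_y f_i).
        return A[i] @ y + b[i], -(A[i].T @ x + c[i])

    def jacT(i, vx, vy):
        # For bilinear games the Jacobian of xi_i is the constant block matrix
        # [[0, A_i], [-A_i^T, 0]]; this applies its transpose to (vx, vy).
        return -A[i] @ vy, A[i].T @ vx

    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(n)  # two independent samples
        gx_i, gy_i = jacT(i, *xi(j))
        gx_j, gy_j = jacT(j, *xi(i))
        # Unbiased estimate of grad H(w): 0.5 * (J_i^T xi_j + J_j^T xi_i).
        x -= step * 0.5 * (gx_i + gx_j)
        y -= step * 0.5 * (gy_i + gy_j)
    return x, y

# Usage with the instance from the previous sketch:
# A, b, c = make_bilinear_game()
# x, y = shgd(A, b, c)
```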