A Fast Optimistic Method for Monotone Variational Inequalities

Authors: Michael Sedlmayer, Dang-Khoa Nguyen, Radu Ioan Bot

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To empirically validate our algorithm we investigate a two-player matrix game with mixed strategies of the two players. Concluding, we show promising results regarding the application of fOGDA-VI to the training of generative adversarial nets. (A sketch of the matrix-game formulation is given below the table.)
Researcher Affiliation | Academia | (1) Research Network Data Science, University of Vienna, Vienna, Austria; (2) Faculty of Mathematics, University of Vienna, Vienna, Austria.
Pseudocode | Yes | Algorithm 1: fOGDA-VI
Open Source Code | Yes | In the following we report the code of the wrapper for the fOGDA-VI optimiser written using the PyTorch (Paszke et al., 2019) framework. (A generic optimizer-wrapper sketch is given below the table.)
Open Datasets | Yes | apply our proposed algorithm fOGDA-VI to train ResNet architectures on the CIFAR-10 dataset. (A torchvision loading snippet is given below the table.)
Dataset Splits | No | The paper mentions using the CIFAR-10 dataset but does not explicitly state the train/validation/test splits or how they were derived and used.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using the PyTorch framework but does not provide version numbers for it or for any other software dependencies.
Experiment Setup | Yes | In Table 4 we list the hyperparameters that were used for fOGDA-VI to obtain the results on CIFAR-10: Batch size = 128; Iterations = 500,000; Adam β1 = 0.0; Adam β2 = 0.9; Update ratio D/G = 5; Learning rate (discriminator) = 1e-4; Learning rate (generator) = 1e-4; fOGDA α = 100; fOGDA n = 1000. (These values are collected into a single configuration sketch below the table.)
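
The two-player matrix game mentioned under Research Type is the standard saddle-point problem min_x max_y xᵀAy over the probability simplices, whose monotone VI operator is F(x, y) = (Ay, -Aᵀx). The sketch below only illustrates this operator; the payoff matrix and dimensions are placeholders, not the instance used in the paper.

```python
import torch

torch.manual_seed(0)
A = torch.randn(5, 5)  # illustrative payoff matrix; the paper uses its own instance

def game_operator(x, y):
    """VI operator F(x, y) = (A y, -A^T x) of the bilinear game min_x max_y x^T A y."""
    return A @ y, -A.T @ x

# Uniform mixed strategies for both players (points on the probability simplex).
x = torch.full((5,), 0.2)
y = torch.full((5,), 0.2)
Fx, Fy = game_operator(x, y)
```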
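The released code is described as a PyTorch optimizer wrapper for fOGDA-VI. As a structural illustration only, here is a minimal sketch of a generic past-gradient optimistic step, z_{k+1} = z_k - lr · (2 g_k - g_{k-1}); this is not the authors' fOGDA-VI update, which additionally depends on the α and n parameters listed under Experiment Setup.

```python
import torch
from torch.optim import Optimizer

class OptimisticSGD(Optimizer):
    """Generic optimistic (past-gradient) step: z_{k+1} = z_k - lr * (2 g_k - g_{k-1}).

    A structural sketch of an optimizer wrapper only -- not the released fOGDA-VI code.
    """

    def __init__(self, params, lr=1e-4):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr = group["lr"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                prev = state.get("prev_grad")
                # Fall back to a plain gradient step on the first iteration.
                update = p.grad if prev is None else 2 * p.grad - prev
                state["prev_grad"] = p.grad.detach().clone()
                p.add_(update, alpha=-lr)
        return loss
```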
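CIFAR-10 can be obtained through torchvision with its default train/test split (50,000/10,000 images); the paper does not describe its own split or preprocessing, so the transform below is only a common choice for GAN training, not the authors' pipeline.

```python
import torchvision
import torchvision.transforms as transforms

# A common preprocessing choice; the paper does not describe its own pipeline.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # scale images to [-1, 1]
])

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform
)
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform
)
```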
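For reference, the Table 4 hyperparameters quoted under Experiment Setup can be collected in one place. The values are the reported ones; the key names are illustrative and do not correspond to the released wrapper's arguments.

```python
# Values reproduced from Table 4 of the paper (CIFAR-10 experiment).
# Key names are illustrative only.
cifar10_config = {
    "batch_size": 128,
    "iterations": 500_000,
    "adam_beta1": 0.0,
    "adam_beta2": 0.9,
    "update_ratio_D_over_G": 5,
    "lr_discriminator": 1e-4,
    "lr_generator": 1e-4,
    "fogda_alpha": 100,
    "fogda_n": 1000,
}
```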