The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization

Authors: Constantinos Daskalakis, Ioannis Panageas

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we provide two examples/experiments, one 2-dimensional (a function f : R^2 → R, with x, y ∈ R) and one higher-dimensional (f : R^10 → R, with x, y ∈ R^5). The purpose of these experiments is to get better intuition about our findings.
Researcher Affiliation | Academia | Constantinos Daskalakis, CSAIL, MIT, Cambridge, MA 02138, costis@csail.mit.edu; Ioannis Panageas, ISTD, SUTD, Singapore 487371, ioannis@sutd.edu.sg
Pseudocode | No | The paper provides mathematical equations for the GDA and OGDA dynamics, but no pseudocode or algorithm blocks are present (see the update-rule sketch after this table).
Open Source Code | No | The paper does not contain any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper constructs specific polynomial functions for its examples and generates random initializations, rather than using or providing access information for a publicly available dataset.
Dataset Splits | No | The paper describes using 10000 random initializations in its experiments, but provides no train/validation/test splits; it generates synthetic initial conditions rather than using a standard dataset.
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies, such as library names with version numbers, that would be needed to replicate the experiments.
Experiment Setup | No | The paper mentions 10000 random initializations and uses α = 0.001 for an illustration, but it does not provide a comprehensive experimental setup, such as concrete hyperparameter values, optimizer settings, or training configurations (see the run sketch after this table).
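
As the Pseudocode row notes, the paper specifies the GDA and OGDA dynamics only as equations. Below is a minimal Python sketch of the standard update rules from this line of work; the names `gda_step`, `ogda_step`, `grad_x`, `grad_y`, and `alpha` are illustrative placeholders, not identifiers from the paper.

```python
def gda_step(x, y, grad_x, grad_y, alpha):
    """One GDA step: gradient descent in x, gradient ascent in y."""
    return x - alpha * grad_x(x, y), y + alpha * grad_y(x, y)

def ogda_step(x, y, x_prev, y_prev, grad_x, grad_y, alpha):
    """One OGDA step: the current gradient enters with weight 2*alpha
    and the previous iterate's gradient is subtracted with weight alpha."""
    x_new = x - 2 * alpha * grad_x(x, y) + alpha * grad_x(x_prev, y_prev)
    y_new = y + 2 * alpha * grad_y(x, y) - alpha * grad_y(x_prev, y_prev)
    return x_new, y_new
```

Because the updates act coordinate-wise, the same functions cover both the 2-dimensional and the 10-dimensional example when x and y are numpy arrays rather than scalars.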
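
The Experiment Setup row reports only a step size of α = 0.001 and 10000 random initializations. The sketch below is a hedged reconstruction of such a run: the bilinear objective f(x, y) = x·y stands in for the paper's polynomial examples (which are not reproduced in this report), and the sampling range and iteration budget are assumptions.

```python
import numpy as np

# Stand-in objective: f(x, y) = x * y, a bilinear function whose unique
# min-max equilibrium is the saddle point (0, 0). The paper's actual
# polynomial examples are not reproduced in this report.
alpha = 0.001       # step size reported in the paper
n_inits = 10_000    # number of random initializations reported in the paper
n_steps = 5_000     # assumed iteration budget (not stated in the paper)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, n_inits)   # assumed sampling range
y = rng.uniform(-1.0, 1.0, n_inits)
x_prev, y_prev = x.copy(), y.copy()

start_dist = np.hypot(x, y).mean()
for _ in range(n_steps):
    # For f(x, y) = x * y: grad_x(x, y) = y and grad_y(x, y) = x.
    x_new = x - 2 * alpha * y + alpha * y_prev
    y_new = y + 2 * alpha * x - alpha * x_prev
    x_prev, y_prev, x, y = x, y, x_new, y_new

# Under OGDA the mean distance to the saddle shrinks (slowly at this small
# step size); plain GDA on the same objective spirals away from the saddle.
print(f"mean distance to (0, 0): {start_dist:.6f} -> {np.hypot(x, y).mean():.6f}")
```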