Convex-Concave Min-Max Stackelberg Games

Authors: Denizalp Goktas, Amy Greenwald

NeurIPS 2021

Reproducibility Variable Result LLM Response
Research Type Experimental We introduce two first-order methods that solve a large class of convex-concave min-max Stackelberg games, and show that our methods converge in polynomial time. Min-max Stackelberg games were first studied by Wald, under the posthumous name of Wald's maximin model, a variant of which is the main paradigm used in robust optimization, which means that our methods can likewise solve many convex robust optimization problems. We observe that the computation of competitive equilibria in Fisher markets also comprises a min-max Stackelberg game. Further, we demonstrate the efficacy and efficiency of our algorithms in practice by computing competitive equilibria in Fisher markets with varying utility structures. Our experiments suggest potential ways to extend our theoretical results, by demonstrating how different smoothness properties can affect the convergence rate of our algorithms.
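The class of games described in the abstract can be written as a nested optimization in which the inner (follower's) feasible set depends on the outer (leader's) choice; this dependence is what distinguishes a Stackelberg min-max game from a simultaneous-move one. A standard way to write the problem (notation assumed, following the usual min-max Stackelberg formulation):

```latex
\min_{x \in X} \; \max_{y \in Y \,:\, g(x, y) \geq 0} \; f(x, y)
```

When the constraint $g$ does not depend on $x$, this reduces to a classical convex-concave saddle-point problem, and when $y$ ranges over an uncertainty set it recovers the robust-optimization (Wald maximin) form mentioned above.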
Researcher Affiliation Academia Denizalp Goktas Department of Computer Science Brown University Providence, RI 02912 denizalp_goktas@brown.edu Amy Greenwald Department of Computer Science Brown University Providence, RI 02912 amy_greenwald@brown.edu
Pseudocode Yes Algorithm 1: Max-Oracle Gradient Descent
Inputs: X, Y, f, g, η = (η_t), T, x^(0), δ
Output: (x^(T), y^(T))
1: for t = 1, ..., T do
2:   Find y^(t−1) ∈ Y s.t. f(x^(t−1), y^(t−1)) ≥ V(x^(t−1)) − δ and g(x^(t−1), y^(t−1)) ≥ 0
3:   Set λ^(t−1) = λ*(x^(t−1), y^(t−1))
4:   Set x^(t) = Π_X[x^(t−1) − η_t(∇_x f(x^(t−1), y^(t−1)) + Σ_{k=1}^K λ_k^(t−1) ∇_x g_k(x^(t−1), y^(t−1)))]
5: end for
6: Find y^(T) ∈ Y s.t. f(x^(T), y^(T)) ≥ V(x^(T)) − δ and g(x^(T), y^(T)) ≥ 0
7: return (x^(T), y^(T))
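A minimal executable sketch of the max-oracle loop above, on a toy unconstrained convex-concave objective f(x, y) = x² + 2xy − y² with X = Y = [−1, 1]. The objective, step size, and closed-form oracle are illustrative choices, not the paper's experiments; since the toy has no coupling constraint g, the multiplier step (line 3) drops out and the x-update uses ∇_x f alone:

```python
import numpy as np

def max_oracle(x):
    # Inner best response: argmax over y of f(x, y) = x^2 + 2xy - y^2.
    # Setting df/dy = 2x - 2y = 0 gives y* = x, clipped to Y = [-1, 1].
    return np.clip(x, -1.0, 1.0)

def grad_x(x, y):
    # Partial derivative of f with respect to the leader's variable x.
    return 2 * x + 2 * y

def max_oracle_gd(x0=0.9, eta=0.1, T=200):
    """Projected gradient descent on x, calling the max oracle each iteration."""
    x = x0
    for _ in range(T):
        y = max_oracle(x)                       # line 2: (exact) max oracle
        x = np.clip(x - eta * grad_x(x, y),     # line 4: gradient step ...
                    -1.0, 1.0)                  # ... projected back onto X
    return x, max_oracle(x)
```

On this toy problem V(x) = 2x², so the iterates contract toward the saddle point (0, 0) at rate (1 − 4η) per step.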
Open Source Code Yes Our code can be found at https://github.com/denizalp/min-max-fisher.git.
Open Datasets No The paper describes running experiments on randomly generated Fisher markets rather than on publicly available datasets.
Dataset Splits No The paper describes experiments on randomly initialized markets and does not mention specific train/validation/test splits for external datasets.
Hardware Specification No The paper does not specify the hardware used for running the experiments (e.g., specific GPU or CPU models).
Software Dependencies No The paper mentions a link to code and cites libraries like NumPy, Matplotlib, and pandas, but it does not specify version numbers for the software dependencies used in their experimental setup.
Experiment Setup Yes We include a more detailed description of our experimental setup in Appendix F. For each market, we use randomly generated parameters and run our algorithms for 500 iterations. For each utility function, we initialize the prices with two different settings: low initial prices and high initial prices.
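The described setup (randomly generated market parameters, two price-initialization regimes) could be sketched as follows; the function names, parameter ranges, and market sizes here are hypothetical, since the report does not reproduce those details from Appendix F:

```python
import numpy as np

def make_fisher_market(n_buyers=5, n_goods=8, seed=0):
    """Draw random Fisher-market parameters: one valuation per
    (buyer, good) pair and one budget per buyer (ranges assumed)."""
    rng = np.random.default_rng(seed)
    valuations = rng.uniform(0.0, 1.0, size=(n_buyers, n_goods))
    budgets = rng.uniform(1.0, 2.0, size=n_buyers)
    return valuations, budgets

def initial_prices(n_goods, mode="low", seed=0):
    """Two initialization regimes, mirroring the 'low initial prices'
    and 'high initial prices' settings (scales assumed)."""
    rng = np.random.default_rng(seed)
    scale = {"low": 0.1, "high": 10.0}[mode]
    return scale * rng.uniform(0.5, 1.5, size=n_goods)
```

Each generated market would then be handed to the solver for the stated 500 iterations, once per initialization regime.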