Zero-Sum Stochastic Stackelberg Games

Authors: Denizalp Goktas, Sadie Zhao, Amy Greenwald

NeurIPS 2022

Reproducibility assessment (Variable: Result — supporting LLM response):
Research Type: Experimental — "Finally, we close with a series of experiments that showcase how our methodology can be used to solve the consumption-savings problem in stochastic Fisher markets." and "5 Experiments"
Researcher Affiliation: Academia — "Denizalp Goktas, Department of Computer Science, Brown University, Providence, RI 02906, USA; Jiayi Zhao, Department of Computer Science, Pomona College, Pomona, CA, USA; Amy Greenwald, Brown University, Providence, RI 02906, USA"
Pseudocode: Yes — "Algorithm 1: Value Iteration for Stochastic Fisher Market"
Open Source Code: Yes — "Our code can be found here, and details of our experimental setup can be found in Appendix E." (URL: https://github.com/Sadie-Zhao/Zero-Sum-Stochastic-Stackelberg-Games-NeurIPS)
Open Datasets: No — "To do so, we computed the recursive Stackelberg equilibria of three different classes of stochastic Fisher markets with savings. Specifically, we created markets with three classes of utility functions, each of which endowed the state-value function with different smoothness properties."
Dataset Splits: No — The paper does not describe explicit training, validation, or test dataset splits in the context of data partitioning for model training or evaluation.
Hardware Specification: Yes — "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Details can be found in Appendix E."
Software Dependencies: No — "To compute the min-max value of each state that we sampled, i.e., the solution to the optimization problem in line 4 of Algorithm 1, we used nested gradient descent ascent [6], which repeatedly runs a step of gradient descent on the prices and a loop of gradient ascent on the allocations and savings (Algorithm 3). We computed gradients via auto-differentiation using JAX [63], which we observed achieved better numerical stability than analytically derived gradients, as can often be the case with auto-differentiation [64]."
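The nested gradient descent ascent described above can be sketched as follows. This is an illustrative toy version, not the paper's Algorithm 3: the function name `nested_gda`, the step sizes, the iteration counts, and the test objective are all assumptions, and the real method operates on prices (descent) and allocations/savings (ascent) rather than scalar variables.

```python
import jax
import jax.numpy as jnp

def nested_gda(f, x0, y0, eta_x=0.1, eta_y=0.1, outer_steps=100, inner_steps=25):
    """Nested GDA sketch: one descent step on the min variable x, then a full
    ascent loop on the max variable y, repeated. Step sizes are hypothetical."""
    grad_x = jax.grad(f, argnums=0)  # gradient w.r.t. the min player's variable
    grad_y = jax.grad(f, argnums=1)  # gradient w.r.t. the max player's variable
    x, y = x0, y0
    for _ in range(outer_steps):
        x = x - eta_x * grad_x(x, y)       # descent step (prices, in the paper)
        for _ in range(inner_steps):
            y = y + eta_y * grad_y(x, y)   # ascent loop (allocations/savings)
    return x, y

# Toy saddle-point problem min_x max_y x^2 - y^2, whose unique saddle is (0, 0).
x_star, y_star = nested_gda(lambda x, y: x**2 - y**2,
                            jnp.array(3.0), jnp.array(2.0))
```

Gradients here come from JAX's auto-differentiation (`jax.grad`), as the paper reports using, rather than from analytically derived expressions.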
Experiment Setup: Yes — "Specifically, we created markets with three classes of utility functions, each of which endowed the state-value function with different smoothness properties. Let t_i ∈ ℝ^m be a vector of parameters, i.e., a type, that describes the utility function of buyer i ∈ [n]. We considered the following (standard) utility function classes: 1. linear: u_i(x_i) = Σ_{j∈[m]} t_ij x_ij; 2. Cobb-Douglas: u_i(x_i) = Π_{j∈[m]} x_ij^{t_ij}; and 3. Leontief: u_i(x_i) = min_{j∈[m]} x_ij / t_ij. We ran two different experiments. First, we modeled a small stochastic Fisher market with savings without interest rates. ... Second, we modeled a larger stochastic Fisher market with savings and probabilistic interest rates. ... we chose five different equiprobable interest rates (0.9, 1.0, 1.1, 1.2, and 1.5) ..." and "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Details can be found in Section 5 and Appendix E."
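The three utility classes quoted above can be sketched as plain functions. This is a minimal NumPy illustration under the standard Fisher-market forms; in particular, the Leontief expression is truncated in the extracted quote, so the common form min_j x_ij / t_ij is assumed here, and the type vector `t` and bundle `x` below are made-up example values.

```python
import numpy as np

def linear_utility(t, x):
    # u_i(x_i) = sum_{j in [m]} t_ij * x_ij
    return float(np.dot(t, x))

def cobb_douglas_utility(t, x):
    # u_i(x_i) = prod_{j in [m]} x_ij ** t_ij
    return float(np.prod(np.power(x, t)))

def leontief_utility(t, x):
    # u_i(x_i) = min_{j in [m]} x_ij / t_ij  (standard form, assumed)
    return float(np.min(np.asarray(x) / np.asarray(t)))

# Example buyer type and bundle (illustrative values only).
t = np.array([1.0, 2.0])
x = np.array([3.0, 4.0])
# linear: 1*3 + 2*4 = 11; Cobb-Douglas: 3**1 * 4**2 = 48; Leontief: min(3/1, 4/2) = 2
```

As the quote notes, these classes differ in smoothness: linear and Cobb-Douglas utilities are differentiable in x, while Leontief utility is only piecewise differentiable, which affects the smoothness of the induced state-value function.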