Computing Optimal Equilibria in Repeated Games with Restarts

Authors: Ratip Emin Berker, Vincent Conitzer

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Section 6 Experiments: We present the semi-log plots for the runtimes of Algorithms 1-3 in Figure 1. Given the number of actions n and a positive integer Maximum Payoff of Deviation (MPD), we generated a game by (for each action j ∈ [n]) uniformly choosing p(j) from {i ∈ Z : 0 ≤ i ≤ 30} and uniformly choosing p′(j) from {i ∈ Z : p(j) ≤ i ≤ MPD}. As expected, the runtimes of all algorithms increase with increasing n, and the runtime of FPTAS increases with decreasing ε."
Researcher Affiliation | Academia | "Ratip Emin Berker and Vincent Conitzer, Foundations of Cooperative AI Lab (FOCAL), Computer Science Department, Carnegie Mellon University, {rberker, conitzer}@cs.cmu.edu"
Pseudocode | Yes | "Algorithm 1: Dynamic Program for Opt Rep; Algorithm 2: Integer Linear Program for Opt Rep; Algorithm 3: FPTAS for Opt Rep"
Open Source Code | No | No mention of open-source code release or repository links.
Open Datasets | No | "Given the number of actions n and a positive integer Maximum Payoff of Deviation (MPD), we generated a game by (for each action j ∈ [n]) uniformly choosing p(j) from {i ∈ Z : 0 ≤ i ≤ 30} and uniformly choosing p′(j) from {i ∈ Z : p(j) ≤ i ≤ MPD}."
Dataset Splits | No | "Given the number of actions n and a positive integer Maximum Payoff of Deviation (MPD), we generated a game by (for each action j ∈ [n]) uniformly choosing p(j) from {i ∈ Z : 0 ≤ i ≤ 30} and uniformly choosing p′(j) from {i ∈ Z : p(j) ≤ i ≤ MPD}."
Hardware Specification | No | No mention of specific GPU/CPU models, processor types, or memory amounts used for experiments.
Software Dependencies | No | The paper discusses algorithmic approaches (Dynamic Program, ILP, FPTAS) but does not provide specific software dependencies or version numbers used for their implementation.
Experiment Setup | Yes | "Given the number of actions n and a positive integer Maximum Payoff of Deviation (MPD), we generated a game by (for each action j ∈ [n]) uniformly choosing p(j) from {i ∈ Z : 0 ≤ i ≤ 30} and uniformly choosing p′(j) from {i ∈ Z : p(j) ≤ i ≤ MPD}. Each data point is averaged over 5000 trials."
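
For readers who want to reproduce the setup, the instance-generation procedure quoted above is straightforward to express in code. Since the paper releases no implementation (see the Open Source Code row), the following Python sketch is ours: the names generate_game, mean_runtime, and solve are hypothetical, and we assume MPD >= 30 so that the range {p(j), ..., MPD} is always nonempty.

import random
import time

def generate_game(n, mpd, rng=random):
    """Sample one random instance as in Section 6: for each action j,
    p(j) ~ Uniform{0, ..., 30} and p'(j) ~ Uniform{p(j), ..., MPD}.
    Assumes mpd >= 30 so the second range is never empty."""
    p = [rng.randint(0, 30) for _ in range(n)]           # on-path payoffs p(j)
    p_dev = [rng.randint(p[j], mpd) for j in range(n)]   # deviation payoffs p'(j)
    return p, p_dev

def mean_runtime(solve, n, mpd, trials=5000):
    """Average wall-clock runtime of a solver over `trials` random games,
    matching the paper's 5000-trial averaging per data point."""
    total = 0.0
    for _ in range(trials):
        game = generate_game(n, mpd)
        start = time.perf_counter()
        solve(game)                                      # any of Algorithms 1-3
        total += time.perf_counter() - start
    return total / trials

Sweeping n (and, for the FPTAS, the approximation parameter ε) and plotting mean_runtime on a semi-log axis should reproduce the shape of the runtime curves reported in Figure 1.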