Learning in two-player zero-sum partially observable Markov games with perfect recall
Authors: Tadashi Kozuno, Pierre Ménard, Rémi Munos, Michal Valko
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the problem of learning a Nash equilibrium (NE) in an imperfect information game (IIG) through self-play. Precisely, we focus on two-player, zero-sum, episodic, tabular IIGs under the perfect-recall assumption, where the only feedback is realizations of the game (bandit feedback). In particular, the dynamics of the IIG are not known; we can only access them by sampling or interacting with a game simulator. For this learning setting, we provide the Implicit Exploration Online Mirror Descent (IXOMD) algorithm. It is a model-free algorithm with a high-probability bound on the convergence rate to the NE of order 1/√T, where T is the number of played games. Moreover, IXOMD is computationally efficient as it needs to perform the updates only along the sampled trajectory. |
| Researcher Affiliation | Collaboration | Tadashi Kozuno (University of Alberta, tadashi.kozuno@gmail.com); Pierre Ménard (Otto von Guericke Universität Magdeburg, pierre.menard@ovgu.de); Rémi Munos (DeepMind Paris, munos@deepmind.com); Michal Valko (DeepMind Paris, valkom@deepmind.com) |
| Pseudocode | Yes | Algorithm 1: IXOMD for the Max-Player. Input: IX hyper-parameter γ ∈ (0, ∞) and OMD's learning rate η ∈ (0, ∞). Output: A near-NE policy for the max-player. |
| Open Source Code | No | The paper does not contain any statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper describes a theoretical game setting ('Partially observable Markov game (POMG)') and an algorithm for self-play within it, rather than using or providing access to a publicly available dataset for training. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments with traditional datasets; therefore, no specific dataset split information for training, validation, or testing is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe empirical experiments; therefore, no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe specific software implementations with version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and theoretical analysis; therefore, it does not include details on experimental setup, hyperparameters, or training configurations. |
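Since the paper provides no code, the following is a minimal sketch of the implicit-exploration (IX) idea that IXOMD builds on, shown in the simpler adversarial multi-armed bandit setting (EXP3-IX style) rather than the paper's full tree-structured game setting. The function name, loss values, and hyper-parameters below are illustrative, not the paper's; the key device is the same: biasing the importance-weighted loss estimate by adding γ to the denominator, which keeps the estimates bounded and yields high-probability guarantees.

```python
import numpy as np

def exp3_ix(losses, eta, gamma, rng):
    """Exponential-weights OMD with implicit exploration (illustrative sketch).

    losses: (T, K) array of per-round losses in [0, 1] (made-up data).
    eta:    OMD learning rate, eta > 0.
    gamma:  IX hyper-parameter, gamma > 0.
    Returns the learner's total incurred loss over T rounds.
    """
    T, K = losses.shape
    cum_loss_est = np.zeros(K)  # cumulative IX-biased loss estimates
    total_loss = 0.0
    for t in range(T):
        # OMD with entropic regularizer = exponential weights over estimates.
        w = np.exp(-eta * (cum_loss_est - cum_loss_est.min()))
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        loss = losses[t, arm]
        total_loss += loss
        # Implicit exploration: gamma in the denominator caps the estimate at
        # loss / gamma instead of the unbounded 1 / p[arm] of plain EXP3.
        cum_loss_est[arm] += loss / (p[arm] + gamma)
    return total_loss
```

In IXOMD this same biased estimator is applied to realization-plan (sequence-form) policies updated by mirror descent only along the sampled trajectory, which is what makes the per-episode update cheap.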