Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
Authors: Kaiqing Zhang, Sham M. Kakade, Tamer Başar, Lin F. Yang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | As a theory-oriented work, we do not believe that our research will cause any ethical issue, or put anyone at any disadvantage. |
| Researcher Affiliation | Collaboration | Kaiqing Zhang (ECE and CSL, University of Illinois at Urbana-Champaign, kzhang66@illinois.edu); Sham M. Kakade (CS and Statistics, University of Washington, and Microsoft Research, sham@cs.washington.edu); Tamer Başar (ECE and CSL, University of Illinois at Urbana-Champaign, basar1@illinois.edu); Lin F. Yang (ECE, University of California, Los Angeles, linyang@ee.ucla.edu) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not use or reference any datasets. |
| Dataset Splits | No | The paper is theoretical and does not report empirical experiments, so no training, validation, or test splits are mentioned. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training configurations. |