Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games
Authors: Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper states: "To the best of our knowledge, this is the first success of applying model-free reinforcement learning with function approximation to discrete-time mean-field Markov games with provable non-asymptotic global convergence guarantees." |
| Researcher Affiliation | Academia | Zuyue Fu (Northwestern University, zuyue.fu@u.northwestern.edu); Zhuoran Yang (Princeton University, zy6@princeton.edu); Yongxin Chen (Georgia Institute of Technology, yongchen@gatech.edu); Zhaoran Wang (Northwestern University, zhaoranwang@gmail.com) |
| Pseudocode | Yes | Algorithm 1: Mean-Field Actor-Critic for solving LQ-MFG; Algorithm 2: Natural Actor-Critic for D-LQR. A minimal illustrative sketch of such an actor-critic loop follows the table. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper is theoretical and uses no publicly available datasets; it reports no training experiments. |
| Dataset Splits | No | The paper is theoretical and does not discuss training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not specify any hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and convergence proofs; it reports no experimental setup details such as hyperparameters or training settings. |
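To give a concrete sense of the kind of actor-critic scheme the paper analyzes, below is a minimal Python sketch of a policy-gradient loop for a scalar LQR problem, loosely patterned after the spirit of Algorithm 2. Everything in it is an assumption made for illustration: the dynamics constants `A` and `B`, the costs `Q` and `R`, the discount factor, the step sizes, and the zeroth-order (finite-difference) gradient estimate, which stands in for the temporal-difference critic and natural-gradient update the paper actually studies. It is a sketch of the general technique, not the paper's method.

```python
import numpy as np

# Illustrative scalar LQR: dynamics x' = A*x + B*u + noise, cost Q*x^2 + R*u^2.
# All constants below are assumptions for this sketch, not values from the paper.
A, B = 0.9, 0.5
Q, R = 1.0, 1.0
GAMMA = 0.95  # discount factor (assumed)

def cost_to_go(K, horizon=200, episodes=64, sigma=0.1, seed=0):
    """Monte Carlo estimate of the discounted cost of the linear policy u = -K*x."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(episodes):
        x, disc = rng.normal(), 1.0
        for _ in range(horizon):
            u = -K * x
            total += disc * (Q * x**2 + R * u**2)
            x = A * x + B * u + sigma * rng.normal()
            disc *= GAMMA
    return total / episodes

K, lr, eps = 0.0, 0.02, 1e-2
for it in range(50):
    # Zeroth-order gradient estimate via finite differences with common random
    # seeds; the paper's Algorithm 2 instead fits a critic by temporal-difference
    # learning and applies a natural-gradient actor update -- this is a stand-in.
    grad = (cost_to_go(K + eps, seed=it) - cost_to_go(K - eps, seed=it)) / (2 * eps)
    K -= lr * grad  # plain gradient step (natural-gradient preconditioning omitted)

print(f"learned feedback gain K = {K:.3f}")
```

Using a common random seed for the two perturbed cost evaluations keeps the finite-difference gradient low-variance; the paper's non-asymptotic guarantees, by contrast, are proved for the TD-based critic and natural-gradient actor, which this toy loop does not reproduce.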