Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Policy Gradient Methods Find the Nash Equilibrium in N-player General-sum Linear-quadratic Games
Authors: Ben Hambly, Renyuan Xu, Huining Yang
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our results with numerical experiments to show that even in situations where the policy gradient method may not converge in the deterministic setting, the addition of noise leads to convergence. |
| Researcher Affiliation | Academia | Mathematical Institute University of Oxford, Department of Industrial Systems and Engineering University of Southern California, Department of Operations Research and Financial Engineering Princeton University |
| Pseudocode | Yes | Algorithm 1 Natural Policy Gradient Method with Known Parameters |
| Open Source Code | No | The paper does not provide explicit access information or links to open-source code for the methodology described. |
| Open Datasets | No | We apply the natural policy gradient algorithm with unknown parameters to a two-player LQ game example with synthetic data consisting of a two-dimensional state variable and a one-dimensional control variable. |
| Dataset Splits | No | The paper uses synthetic data generated based on specified parameters and initial state distributions for its numerical experiments, rather than external datasets requiring explicit train/test/validation splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, or cloud resources) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details (e.g., library or solver names with version numbers) used to replicate the experiments. |
| Experiment Setup | Yes | The natural policy gradient algorithm shows a reasonable level of accuracy within 1000 iterations (that is, the normalized error is less than 0.5%) for both players under different levels of system noise σ², which ranges from 0 (deterministic dynamics) to 10. See Figure 1 for the case where r = 0.25 and Figure 2 for the case where r = 0.30. |
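For readers unfamiliar with the algorithm family classified above, the following is a minimal sketch of a natural policy gradient update with known parameters, in the spirit of the paper's Algorithm 1. It is *not* the paper's N-player general-sum method: it illustrates the single-player discrete-time LQR analogue, with illustrative system matrices chosen here (all numerical values are assumptions, not taken from the paper).

```python
import numpy as np

# Illustrative stable system (assumed values, not from the paper):
# x_{t+1} = A x_t + B u_t, cost sum of x'Qx + u'Ru, policy u_t = -K x_t.
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def value_matrix(K, iters=1000):
    """Solve the policy-evaluation Lyapunov equation
    P = Q + K'RK + (A-BK)' P (A-BK) by fixed-point iteration."""
    Acl = A - B @ K
    P = np.zeros_like(Q)
    for _ in range(iters):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
    return P

def natural_gradient(K):
    """Natural gradient direction E_K = (R + B'P_K B)K - B'P_K A,
    i.e. the policy gradient preconditioned by the state covariance."""
    P = value_matrix(K)
    return (R + B.T @ P @ B) @ K - B.T @ P @ A

# Natural policy gradient iteration from a stabilizing initial policy.
K = np.zeros((1, 2))
eta = 0.1  # step size (assumed; small enough to preserve stability here)
for _ in range(2000):
    K = K - eta * natural_gradient(K)

# Reference solution via discrete-time Riccati fixed-point iteration.
P = np.eye(2)
for _ in range(5000):
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        R + B.T @ P @ B, B.T @ P @ A)
K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print("max |K - K*| =", np.max(np.abs(K - K_star)))
```

The fixed point of the update (where E_K = 0) coincides with the Riccati-optimal feedback gain, which is why the iterate can be checked against the Riccati solution; the paper's contribution is extending this style of analysis to N-player general-sum games with system noise.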