Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multi-agent Inverse Reinforcement Learning for Certain General-sum Stochastic Games
Authors: Xiaomin Lin, Stephen C. Adams, Peter A. Beling
JAIR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results are validated on various benchmark grid-world games. ... Section 5 and Section 6 demonstrate our approaches through several benchmark experiments that include comparison with existing MIRL algorithms. ... Numerical results are summarized in Table 1 and Table 2. ... This section presents a demonstration of the adv E-MIRL approach on a stylized soccer game. ... This section evaluates the adv E-MIRL approach. ... The simulation results are presented in Tables 4–7. |
| Researcher Affiliation | Collaboration | Xiaomin Lin (Data Science, MassMutual Financial Group, 470 Atlantic Ave., Boston, MA 02210 USA; University of Virginia, Charlottesville, VA 22904 USA); Stephen C. Adams (University of Virginia, Charlottesville, VA 22904 USA); Peter A. Beling (University of Virginia, Charlottesville, VA 22904 USA) |
| Pseudocode | Yes | Algorithm 1 General Multi-Q-learning algorithm |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the methodology described, nor does it include a link to a code repository or mention code in supplementary materials. |
| Open Datasets | No | This paper uses 'benchmark grid-world games' (GG1 and GG2) and an 'abstract soccer game', which are simulated environments defined by rules. While these are common game *setups* in MIRL research, the paper does not provide specific data files, links, or citations for pre-existing datasets in the traditional sense. Instead, it describes the game rules and performs simulations. |
| Dataset Splits | No | The paper conducts experiments in simulated game environments (grid-world and abstract soccer games) and performs Monte Carlo simulations. It does not describe standard dataset splits (e.g., train/test/validation percentages or specific file partitions) for a pre-existing dataset. Section 7 mentions randomly picking 'k' states for an incomplete policy experiment, which is a specific experimental design for robustness, not a general dataset split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments or simulations. |
| Software Dependencies | No | The paper mentions implementing Q-learning algorithms and using linear programming to solve problems. However, it does not specify any particular software libraries, frameworks, or solvers with version numbers (e.g., Python, PyTorch, CPLEX version) that were used. |
| Experiment Setup | Yes | Algorithm 1 outlines the 'General Multi-Q-learning algorithm' with parameters such as 'α: learning rate'. Section 6.1, 'Prior Specification', details the use of 'Multivariate Gaussian distributions' as priors with specific settings for 'Weak Mean (WM)', 'Medium Mean (MM)', 'Strong Mean (SM)', 'Weak Covariance (WC)', and 'Strong Covariance (SC)'. Section 6.2 mentions that '5000 round games are simulated per case' and 'ball exchange rates β are 0, 0.4 and 1'. |
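The table's Pseudocode and Experiment Setup rows reference a 'General Multi-Q-learning algorithm' with a learning rate α, applied to two-player stochastic games. The sketch below is a minimal illustration of that family of methods, not the paper's Algorithm 1: it uses tabular Q-tables over joint actions and, as a stand-in for the paper's equilibrium operator, values the next state by each agent's best response against a uniformly random opponent. All function and parameter names (`multi_q_learning`, `transition`, `reward`, the toy coordination game) are hypothetical.

```python
import numpy as np

def multi_q_learning(n_states, n_actions, transition, reward, episodes=500,
                     alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch for a 2-player stochastic game.

    Each agent i keeps Q[i][s, a0, a1] over joint actions. The next-state
    value is each agent's best response assuming the opponent acts uniformly
    at random -- a simplification, not the paper's equilibrium computation.
    """
    rng = np.random.default_rng(seed)
    Q = [np.zeros((n_states, n_actions, n_actions)) for _ in range(2)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # step cap per episode
            # epsilon-greedy joint action selection
            a = []
            for i in range(2):
                if rng.random() < eps:
                    a.append(int(rng.integers(n_actions)))
                else:
                    # average out the opponent's axis, best-respond on own axis
                    a.append(int(np.argmax(Q[i][s].mean(axis=1 - i))))
            s2 = transition(s, a[0], a[1], rng)
            for i in range(2):
                v_next = Q[i][s2].mean(axis=1 - i).max()
                td = reward(i, s, a[0], a[1]) + gamma * v_next
                Q[i][s, a[0], a[1]] += alpha * (td - Q[i][s, a[0], a[1]])
            s = s2
    return Q

# Toy coordination game: both agents are rewarded for matching actions,
# and the joint action deterministically flips the state.
def trans(s, a0, a1, rng):
    return (s + a0 + a1) % 2

def rew(i, s, a0, a1):
    return 1.0 if a0 == a1 else 0.0

Q = multi_q_learning(n_states=2, n_actions=2, transition=trans, reward=rew)
```

In a faithful implementation, the `v_next` line would instead solve for an equilibrium value of the stage game at `s2` (e.g., via linear programming, as the paper mentions), which is where the various multi-agent Q-learning variants differ.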