No-Press Diplomacy: Modeling Multi-Agent Gameplay
Authors: Philip Paquette, Yuchen Lu, Seton Steven Bocco, Max Smith, Satya O.-G., Jonathan K. Kummerfeld, Joelle Pineau, Satinder Singh, Aaron C. Courville
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model is trained by supervised learning (SL) from expert trajectories, which is then used to initialize a reinforcement learning (RL) agent trained through self-play. Both the SL and RL agents demonstrate state-of-the-art No Press performance by beating popular rule-based bots. (A hedged sketch of this SL-to-RL pipeline follows the table.) |
| Researcher Affiliation | Academia | (1) Mila, University of Montreal; (2) Mila, McGill University; (3) University of Michigan |
| Pseudocode | No | The paper describes its architecture and processes using text, diagrams, and equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Dataset and code can be found at https://github.com/diplomacy/research |
| Open Datasets | No | The paper states, 'We are going to release the dataset along with the game engine' (footnote 5). However, footnote 5 specifies: 'Researchers can request access to the dataset by contacting webdipmod@gmail.com. An executive summary describing the research purpose and execution of a confidentiality agreement are required.' This indicates that the dataset is not publicly available without a formal request and agreement. |
| Dataset Splits | No | The paper explicitly defines a 'test set' but does not specify a separate 'validation' split or its size/proportion for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | Finally, we would like to thank John Newbury and Jason van Hal for helpful discussions on DAIDE, Compute Canada for providing the computing resources to run the experiments, and Samsung for providing access to the DGX-1 to run our experiments. |
| Software Dependencies | No | The paper mentions using an 'A2C architecture', integrating with 'DAIDE', and a 'python package' for TrueSkill, but it does not provide specific version numbers for any software libraries, frameworks, or programming languages (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | We train DipNet with self-play (same model for all powers, with shared updates) using an A2C architecture [14] with n-step (n=15) returns for approximately 20,000 updates (approx. 1 million steps). As a reward function, we use the average of (1) a local reward function (+1/-1 when a supply center is gained or lost, updated every phase and not just in Winter), and (2) a terminal reward function (for a solo victory, the winner gets 34 points; for a draw, the 34 points are divided proportionally to the number of supply centers). The policy is pre-trained using DipNet SL described above. (A worked sketch of this reward follows the table.) |
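
The SL-then-RL recipe summarized in the Research Type and Experiment Setup rows can be pictured with a short sketch. The code below is a hedged illustration, not the authors' implementation: the `Policy` network, toy dimensions, random placeholder data, discount factor, and optimizer settings are all assumptions; only the overall structure (cross-entropy pretraining on expert orders, copying the weights into a self-play A2C-style learner with 15-step returns) follows the paper's description.

```python
# Hedged sketch of the SL -> RL pipeline (not the authors' code).
# Shapes, the toy rollout data, and the 0.99 discount are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, NUM_ORDERS = 128, 64  # toy sizes, not the paper's board encoding

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU())
        self.order_head = nn.Linear(256, NUM_ORDERS)  # action logits
        self.value_head = nn.Linear(256, 1)           # critic (used only in RL)

    def forward(self, state):
        h = self.body(state)
        return self.order_head(h), self.value_head(h).squeeze(-1)

# --- Stage 1: supervised learning from expert trajectories ------------------
sl_policy = Policy()
sl_opt = torch.optim.Adam(sl_policy.parameters(), lr=1e-3)
for _ in range(100):                                     # toy loop over expert batches
    states = torch.randn(32, STATE_DIM)                  # placeholder expert states
    expert_orders = torch.randint(0, NUM_ORDERS, (32,))  # placeholder expert labels
    logits, _ = sl_policy(states)
    loss = F.cross_entropy(logits, expert_orders)
    sl_opt.zero_grad()
    loss.backward()
    sl_opt.step()

# --- Stage 2: initialize the RL agent from the SL weights -------------------
rl_policy = Policy()
rl_policy.load_state_dict(sl_policy.state_dict())
rl_opt = torch.optim.Adam(rl_policy.parameters(), lr=1e-4)

# --- One A2C-style update with n-step (n = 15) returns ----------------------
n = 15
states = torch.randn(n + 1, STATE_DIM)   # placeholder self-play rollout
rewards = torch.randn(n)                 # placeholder per-phase rewards
logits, values = rl_policy(states)
dist = torch.distributions.Categorical(logits=logits[:-1])
actions = dist.sample()

returns = torch.empty(n)
bootstrap = values[-1].detach()          # bootstrap from the last state's value
for t in reversed(range(n)):
    bootstrap = rewards[t] + 0.99 * bootstrap  # assumed discount of 0.99
    returns[t] = bootstrap

advantage = returns - values[:-1]
actor_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
rl_opt.zero_grad()
(actor_loss + 0.5 * critic_loss).backward()
rl_opt.step()
```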
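
The blended reward quoted in the Experiment Setup row is also easy to mis-read, so here is a small worked sketch. The function names (`local_reward`, `terminal_reward`, `blended_reward`) and the exact per-phase bookkeeping are illustrative assumptions; the numbers (+1/-1 per supply center, 34 points for a solo win, a proportional split on a draw, and averaging the two terms) come directly from the quoted setup.

```python
# Hedged sketch of the reward described above (not the authors' code).

def local_reward(prev_centers, curr_centers):
    """+1 for each supply center gained, -1 for each lost, every phase."""
    return float(curr_centers - prev_centers)

def terminal_reward(final_centers, power, solo_winner):
    """34 points to a solo winner; on a draw, 34 split by supply-center share."""
    if solo_winner is not None:
        return 34.0 if power == solo_winner else 0.0
    total = sum(final_centers.values())
    return 34.0 * final_centers[power] / total if total else 0.0

def blended_reward(prev_centers, curr_centers, final_centers=None,
                   power=None, solo_winner=None, terminal=False):
    """Average of the local and terminal reward functions, as in the paper."""
    r_local = local_reward(prev_centers, curr_centers)
    r_term = terminal_reward(final_centers, power, solo_winner) if terminal else 0.0
    return 0.5 * (r_local + r_term)

# Example: France ends a drawn game holding 10 of 34 centers, having just
# gained one center in the final phase.
final = {"FRANCE": 10, "ENGLAND": 8, "TURKEY": 16}
print(blended_reward(9, 10, final, "FRANCE", None, terminal=True))  # 0.5 * (1 + 10) = 5.5
```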