Adversarial Policy Learning in Two-player Competitive Games
Authors: Wenbo Guo, Xian Wu, Sui Huang, Xinyu Xing
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our proposed learning algorithm by using five selected games (i.e., four MuJoCo games and StarCraft II). |
| Researcher Affiliation | Collaboration | 1College of Information Sciences and Technology, The Pennsylvania State University, State College, PA, USA 2Netflix Inc., Los Gatos, CA, USA. |
| Pseudocode | No | The paper describes the learning algorithm in text and mathematical equations but does not include a structured pseudocode or algorithm block in the main paper. |
| Open Source Code | Yes | We released our source code to support future research. https://github.com/psuwuxian/rl_adv_valuediff |
| Open Datasets | Yes | In this section, we evaluate our proposed learning algorithm by using five selected games (i.e., four MuJoCo games and StarCraft II). (Todorov et al., 2012) (Sun et al., 2018) |
| Dataset Splits | No | The paper describes training adversarial agents in game environments but does not provide specific train/validation/test dataset splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not name ancillary software with version numbers (e.g., specific library or solver versions) needed to replicate the experiment. |
| Experiment Setup | Yes | Due to space limit, we specify the implementation details and experiment setup (i.e., game and victim policy selection, evaluation metric, hyperparameters) in Supplementary Section S5. |