Feint Behaviors and Strategies: Formalization, Implementation and Evaluation
Authors: Junyu Liu, Xiangjun Peng
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show that our design of Feint behaviors can (1) greatly improve the game reward gains; (2) significantly improve the diversity of Multi-Player Games; and (3) only incur negligible overheads in terms of time consumption. |
| Researcher Affiliation | Academia | Junyu Liu (Brown University, liu_junyu@brown.edu); Xiangjun Peng (The Chinese University of Hong Kong, xjpeng@cse.cuhk.edu.hk) |
| Pseudocode | Yes | Algorithm 1 in Appendix E illustrates the pseudo-code for pre-computing available Feint behavior templates given a set of available attack behaviors B. Algorithm 2 in Appendix E shows the pseudo-code for composing available Dual-Behavior models with backward searches. (An illustrative, clearly hypothetical sketch of the template pre-computation step follows the table.) |
| Open Source Code | No | The NeurIPS checklist states 'No' for open access to data and code, on the grounds that the contribution is a formalization and the implementation builds on existing frameworks; the authors do not release their own implementation code. |
| Open Datasets | Yes | Our main testbed game environment is a multi-player boxing game, which is based on OpenAI's open-source environment Multi-Agent Particle Environment [23], but with heavy additional implementation to create a physically realistic scenario. We also modify and extend a strategic real-world game, Alpha Star [3], which is widely used as an experimental testbed in recent Reinforcement Learning studies [28, 19]. (A sketch of instantiating the public base environment follows the table.) |
| Dataset Splits | No | The paper specifies training iterations but does not explicitly mention validation dataset splits or cross-validation setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions that its implementation is based on 'Johannesack/tf2multiagentrl [1]', but does not specify exact version numbers for programming languages, libraries, or other key software dependencies. |
| Experiment Setup | Yes | All experiments for the two-player scenario are trained for 75,000 game iterations and all experiments for the six-player scenario are trained for 150,000 game iterations. (Recorded in the configuration sketch after the table.) |
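
The paper's actual pseudocode lives in Appendix E and is not reproduced in this report. Purely as an orientation aid, the sketch below shows one plausible shape for a template pre-computation step of the kind the Pseudocode row describes: pairing a truncated prefix of one attack (the feint) with a distinct follow-up attack. Every name here (`AttackBehavior`, `precompute_feint_templates`, `max_prefix`, the pairing logic) is an assumption for illustration, not the authors' Algorithm 1.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AttackBehavior:
    """Hypothetical stand-in for one attack behavior in the set B."""
    name: str
    duration: int  # number of action frames in the full attack

def precompute_feint_templates(behaviors, max_prefix=3):
    """Illustrative sketch (NOT the paper's Algorithm 1): enumerate
    (feint source, prefix length, follow-up attack) triples, where the
    feint is a truncated prefix of one attack used to mislead the
    opponent toward it before a different attack is executed."""
    templates = []
    for feint_src, follow_up in product(behaviors, repeat=2):
        if feint_src is follow_up:
            continue  # a feint should suggest a *different* attack than the follow-up
        prefix_len = min(max_prefix, feint_src.duration - 1)
        if prefix_len > 0:
            templates.append((feint_src.name, prefix_len, follow_up.name))
    return templates

if __name__ == "__main__":
    B = [AttackBehavior("jab", 4), AttackBehavior("hook", 6), AttackBehavior("uppercut", 8)]
    for template in precompute_feint_templates(B):
        print(template)
```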
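The authors' modified boxing environment is not released, but the base environment cited in the Open Datasets row, OpenAI's Multi-Agent Particle Environment, is public. For readers who want a starting point, the sketch below instantiates a stock scenario following the pattern of the public repo's `make_env.py`; the `simple_tag.py` scenario is just a bundled example, and the paper's boxing scenario and physics extensions are not part of this code.

```python
# Instantiating a stock scenario from OpenAI's public
# multiagent-particle-envs repo (requires that package installed).
# The paper's modified boxing scenario is not released;
# "simple_tag.py" is only a bundled example scenario.
from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

scenario = scenarios.load("simple_tag.py").Scenario()
world = scenario.make_world()
env = MultiAgentEnv(
    world,
    reset_callback=scenario.reset_world,
    reward_callback=scenario.reward,
    observation_callback=scenario.observation,
)

obs_n = env.reset()  # one observation per agent
print(env.n, "agents;", [space.shape for space in env.observation_space])
```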
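The only quantitative setup details quoted in the Experiment Setup row are the iteration counts. The fragment below records them as a minimal configuration sketch; the iteration numbers come from the paper, while the field names and surrounding structure are assumptions, not the authors' code.

```python
# Hypothetical configuration sketch: only the iteration counts are from
# the paper; the dictionary layout and helper are assumed for illustration.
TRAINING_ITERATIONS = {
    "two_player": 75_000,   # per the paper, 2-player scenarios
    "six_player": 150_000,  # per the paper, 6-player scenarios
}

def iterations_for(num_players: int) -> int:
    key = "two_player" if num_players == 2 else "six_player"
    return TRAINING_ITERATIONS[key]
```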