Off-Agent Trust Region Policy Optimization

Authors: Ruiqing Chen, Xiaoyuan Zhang, Yali Du, Yifan Zhong, Zheng Tian, Fanglei Sun, Yaodong Yang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on the StarCraft II Multi-Agent Challenge (SMAC) and Google Research Football (GRF) demonstrate that our algorithms outperform state-of-the-art (SOTA) methods and achieve faster convergence, suggesting the viability of our approach for efficient experience reusing in MARL.
Researcher Affiliation | Academia | Ruiqing Chen¹,², Xiaoyuan Zhang¹, Yali Du³, Yifan Zhong¹, Zheng Tian², Fanglei Sun² and Yaodong Yang¹. ¹Institute for AI, Peking University, Beijing, China; ²ShanghaiTech University, Shanghai, China; ³King's College London, UK. yaodong.yang@pku.edu.cn
Pseudocode | Yes | Algorithm 1: Off-agent Policy Iteration with Approximate Monotonic Improvement (an illustrative sketch of this style of update appears after the table).
Open Source Code | No | The paper does not provide an explicit statement of, or link to, its open-source code.
Open Datasets | Yes | Experiments conducted on the StarCraft II Multi-Agent Challenge (SMAC) and Google Research Football (GRF) demonstrate that our algorithms outperform state-of-the-art (SOTA) methods and achieve faster convergence... SMAC [Samvelyan et al., 2019]... GRF [Kurach et al., 2020]
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset split details in the main text.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers).
Experiment Setup | No | The paper states 'Experiment details are in Appendix I/J/K' but does not include specific experimental setup details (e.g., hyperparameter values or training configurations) in the main text.
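
Since the paper's code is not released (see the Open Source Code row), the following is a minimal, hypothetical PyTorch sketch of the general mechanism that off-agent trust-region methods of this kind build on: updating one agent's policy on experience collected by a different agent, with the importance-sampling ratio between the two policies clipped so the update stays inside an approximate trust region. The function name, argument names, and the PPO-style clipping form are assumptions made for illustration, not the paper's Algorithm 1.

```python
import torch

def off_agent_clipped_surrogate(learner_logp, behaviour_logp, advantages, clip_eps=0.2):
    """Clipped surrogate loss where the behaviour policy is ANOTHER agent's
    policy, so the probability ratio doubles as an off-agent importance weight.

    Illustrative sketch only (assumed names and shapes); not the paper's Algorithm 1.
      learner_logp:   log pi_i(a_t | s_t) under the agent being updated
      behaviour_logp: log pi_j(a_t | s_t) under the agent that collected the data
      advantages:     advantage estimates A_t for the sampled state-action pairs
      clip_eps:       radius of the clipped ratio (the approximate trust region)
    """
    ratio = torch.exp(learner_logp - behaviour_logp)  # pi_i / pi_j
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The elementwise minimum is a pessimistic lower bound on the surrogate
    # objective; bounding the update this way is what makes the improvement
    # guarantee approximately monotonic.
    return -torch.min(unclipped, clipped).mean()
```

In a policy-iteration loop of this kind, the loss would typically be minimized with a few gradient steps per batch of reused experience, once per (learner, behaviour) agent pair; the clip radius plays the role of the trust-region constraint.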