Playing Card-Based RTS Games with Deep Reinforcement Learning
Authors: Tianyu Liu, Zijie Zheng, Hongchang Li, Kaigui Bian, Lingyang Song
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments are performed on Clash Royale, a popular mobile card-based RTS game. Empirical results show that the SEAT model agent reaches a high winning rate against rule-based agents and a decision-tree-based agent. |
| Researcher Affiliation | Collaboration | Tianyu Liu¹, Zijie Zheng¹, Hongchang Li², Kaigui Bian¹ and Lingyang Song¹. ¹School of EECS, Peking University; ²Babeltime Technology Co. |
| Pseudocode | Yes | Algorithm 1 SEAT Model |
| Open Source Code | No | The paper does not provide any concrete access information (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | No | A simulation environment of CR on the Windows platform is produced for the experiments with our model. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions that "The SEAT model is implemented in PyTorch" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Hyperparameters not specified in the last section include the discount factor γ (set to 1), the update period N of the target-network sync θ⁻ ← θ (set to 10), and the abandoning-rate ϵ (set to 0.0 and 0.5 for contrast experiments). A hedged sketch of these settings appears below the table. |
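
Since the paper's code is not public and the report confirms only the three hyperparameter values above, the following is a minimal, hypothetical sketch of how γ = 1, the target-update period N = 10, and the abandoning-rate ϵ could be wired into a generic PyTorch Q-learning loop. The `QNet` architecture, the `replay_batches` iterable, and all tensor dimensions are invented for illustration and are not the paper's SEAT model; the abandoning-rate is SEAT-specific, so it appears here only as a named constant.

```python
# Hypothetical sketch only: a generic PyTorch Q-learning loop using the
# hyperparameter values reported above. Nothing below reproduces the
# paper's SEAT model or its Algorithm 1.
import copy
import torch
import torch.nn as nn

GAMMA = 1.0           # discount factor γ (paper: set to 1)
TARGET_UPDATE_N = 10  # update period N for the target sync θ⁻ ← θ (paper: 10)
ABANDON_EPS = 0.5     # abandoning-rate ϵ (paper contrasts 0.0 vs 0.5);
                      # its SEAT-specific semantics are not modeled here

class QNet(nn.Module):
    """Placeholder value network; NOT the paper's SEAT architecture."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, x):
        return self.net(x)

policy = QNet(obs_dim=64, n_actions=8)  # dimensions invented for the sketch
target = copy.deepcopy(policy)          # target parameters θ⁻ start as a copy of θ
optimizer = torch.optim.Adam(policy.parameters())

def train(replay_batches):
    """replay_batches: any iterable of (obs, action, reward, next_obs, done)
    tensors; a hypothetical stand-in for the paper's experience source.
    `action` must be an int64 tensor of chosen action indices."""
    for step, (obs, action, reward, next_obs, done) in enumerate(replay_batches):
        with torch.no_grad():
            # One-step TD target computed with the frozen target network θ⁻.
            max_next_q = target(next_obs).max(dim=1).values
            td_target = reward + GAMMA * (1.0 - done) * max_next_q
        q = policy(obs).gather(1, action.unsqueeze(1)).squeeze(1)
        loss = nn.functional.smooth_l1_loss(q, td_target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % TARGET_UPDATE_N == 0:
            # Periodic hard sync θ⁻ ← θ every N gradient updates.
            target.load_state_dict(policy.state_dict())
```

With ϵ = 0.0 the contrast experiment would reduce to the plain loop above; how ϵ actually gates or abandons actions is defined only in the paper itself.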