NeuralAC: Learning Cooperation and Competition Effects for Match Outcome Prediction

Authors: Yin Gu, Qi Liu, Kai Zhang, Zhenya Huang, Runze Wu, Jianrong Tao

AAAI 2021, pp. 4072-4080 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the performance of NeuralAC, we conduct extensive experiments on four E-sports datasets. The experimental results clearly verify the effectiveness of NeuralAC compared with several state-of-the-art methods.
Researcher Affiliation | Collaboration | Yin Gu (1), Qi Liu (1), Kai Zhang (1), Zhenya Huang (1), Runze Wu (2), Jianrong Tao (2). (1) Anhui Province Key Laboratory of Big Data Analysis and Application, School of Data Science & School of Computer Science and Technology, University of Science and Technology of China; (2) Fuxi AI Lab, NetEase Inc., Hangzhou, China
Pseudocode | No | The paper describes the NeuralAC model and its mathematical formulations (e.g., Equations 2-10) but does not include any explicitly labeled pseudocode block or algorithm steps formatted as code.
Open Source Code | Yes | The code and datasets are available at https://github.com/bigdata-ustc/NAC.
Open Datasets | Yes | We use four E-sports datasets to evaluate the utility of our model. The basic statistics of all the datasets are summarized in Table 1. Dota2 is a famous Multiplayer Online Battle Arena (MOBA) game. We downloaded ranked matches from yasp.co and Varena, which were played in 2015 and 2018 respectively. League of Legends (LOL)... We crawled the recent matches from Riot Games. Teamfight Tactics (TFT)... We crawled the ranked TFT match records via the Riot Games API. The code and datasets are available at https://github.com/bigdata-ustc/NAC.
Dataset Splits | Yes | For every dataset, we randomly divided samples into 80% for training, 10% for validation, and 10% for testing. (A minimal split sketch is given after the table.)
Hardware Specification | Yes | All experiments are implemented in Python and are trained on a Linux server with Intel Xeon E5-2650 CPUs and a TITAN Xp GPU.
Software Dependencies | Yes | HOI, NeuralAC and OptMatch are implemented with the PyTorch package (Paszke et al. 2019). LR, TrueSkill and LGB are implemented with the open-source packages scikit-learn, trueskill and LightGBM, respectively. (An assumed import mapping follows the table.)
Experiment Setup | Yes | For the NeuralAC model, the dimension of hidden layers is set to 50, and ReLU is used as the activation function. We initialize the parameters with Kaiming initialization (He et al. 2015). The Dropout technique (Srivastava et al. 2014) is also applied, with the drop probability set to 0.2. We choose Adam (Kingma and Ba 2014) as the optimizer for HOI, OptMatch and NeuralAC, with a learning rate of 0.001 and a weight decay coefficient of 0.0001. The batch size is set to 256 for HOI, OptMatch and NeuralAC on all datasets. (These settings are collected in the PyTorch sketch after the table.)
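The 80/10/10 random split reported in the Dataset Splits row can be reproduced in a few lines of NumPy. This is a minimal sketch under two assumptions not stated in the paper: samples are addressed by integer index, and the random seed is arbitrary. The authors' repository may split differently.

```python
import numpy as np

def split_indices(n_samples: int, seed: int = 0):
    """Randomly split sample indices into 80% train / 10% validation / 10% test."""
    rng = np.random.default_rng(seed)   # seed is an assumption; the paper does not report one
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(10000)
```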
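The software stack in the Software Dependencies row maps onto standard PyPI packages as sketched below. Package names and the install line are assumptions (the paper does not pin versions), so treat this as an approximate environment rather than the authors' exact one.

```python
# Assumed install: pip install torch scikit-learn trueskill lightgbm
import torch                                          # HOI, NeuralAC and OptMatch (Paszke et al. 2019)
from sklearn.linear_model import LogisticRegression   # LR baseline via scikit-learn
import trueskill                                      # TrueSkill baseline
import lightgbm as lgb                                # LGB baseline
```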
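The hyperparameters in the Experiment Setup row translate directly into PyTorch. The block below is a sketch only: HiddenBlock is a hypothetical stand-in for NeuralAC's hidden layers (the real architecture is in the linked repository), but the hidden dimension, ReLU activation, Kaiming initialization, dropout probability, Adam settings, and batch size follow the reported values.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 50       # dimension of hidden layers
DROPOUT_P = 0.2       # dropout probability
LR = 1e-3             # Adam learning rate
WEIGHT_DECAY = 1e-4   # weight decay coefficient
BATCH_SIZE = 256      # batch size for HOI, OptMatch and NeuralAC

class HiddenBlock(nn.Module):
    """Hypothetical stand-in for NeuralAC's hidden layers, wired with the reported settings."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, HIDDEN_DIM),
            nn.ReLU(),
            nn.Dropout(DROPOUT_P),
            nn.Linear(HIDDEN_DIM, HIDDEN_DIM),
            nn.ReLU(),
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight, nonlinearity='relu')  # Kaiming initialization (He et al. 2015)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

model = HiddenBlock(in_dim=HIDDEN_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
```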