Learning from Group Comparisons: Exploiting Higher Order Interactions

Authors: Yao Li, Minhao Cheng, Kevin Fujii, Fushing Hsieh, Cho-Jui Hsieh

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we show that our proposed models have much better prediction power on several E-sports datasets, and furthermore can be used to reveal interesting patterns that cannot be discovered by previous methods."
Researcher Affiliation | Academia | Yao Li (Department of Statistics, University of California, Davis); Minhao Cheng (Department of Computer Science, University of California, Los Angeles); Kevin Fujii (Department of Statistics, University of California, Davis); Fushing Hsieh (Department of Statistics, University of California, Davis); Cho-Jui Hsieh (Department of Computer Science, University of California, Los Angeles)
Pseudocode | No | The paper describes algorithmic steps and updates (e.g., for SGD), but it does not contain a structured pseudocode block or a clearly labeled "Algorithm" section (see the generic SGD sketch after this table).
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide any links to a code repository.
Open Datasets | Yes | "We collect the following three sets of data. For HotS tournament matches, we download all matching records provided by Hotslogs [2] for the years of 2015 and 2016. For HotS public game data, we crawl the matching history of Master players in Hotslogs. ... For Dota 2, we download the recent data from OpenDota [3]." [2] https://www.hotslogs.com/Default [3] https://www.opendota.com (see the OpenDota data-access sketch after this table)
Dataset Splits | No | "For each dataset, we randomly divided the games into 80% for training and 20% for testing. For all the methods, we cross validate on the training set to choose the best parameter, and then use the best parameter to train a final model, which is then evaluated on the testing set." (see the split/cross-validation sketch after this table)
Hardware Specification | No | The paper does not provide any specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "For all the methods, we cross validate on the training set to choose the best parameter, and then use the best parameter to train a final model, which is then evaluated on the testing set. For our model, determining the values of k is a trade-off between the model efficiency and accuracy. In our experiments, we choose k by cross validation."
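
On the Pseudocode row: the paper mentions SGD-style updates but gives no algorithm block. As a point of reference only, the following is a minimal, generic sketch of a Bradley-Terry-style team model trained with per-match SGD under a logistic loss. It is not the paper's higher-order interaction model; the player IDs, learning rate, and regularization constant are placeholders.

```python
import numpy as np

# Generic sketch: each player i has a scalar strength w[i]; a team's score is the
# sum of its players' strengths, and P(team A wins) is a logistic function of the
# score difference. This only illustrates the kind of per-match SGD update the
# paper refers to; it is NOT the paper's higher-order interaction model.
def sgd_epoch(w, matches, lr=0.05, reg=1e-4):
    """matches: list of (team_a_ids, team_b_ids, y) with y = 1 if team A won."""
    for team_a, team_b, y in matches:
        margin = w[team_a].sum() - w[team_b].sum()
        p = 1.0 / (1.0 + np.exp(-margin))        # predicted P(team A wins)
        g = p - y                                # gradient of logistic loss w.r.t. margin
        w[team_a] -= lr * (g + reg * w[team_a])  # team A players: gradient is +g
        w[team_b] -= lr * (-g + reg * w[team_b]) # team B players: gradient sign flips
    return w

# Tiny synthetic usage example (player IDs and the single match are placeholders).
w = np.zeros(10)
matches = [(np.array([0, 1, 2, 3, 4]), np.array([5, 6, 7, 8, 9]), 1)]
for _ in range(100):
    w = sgd_epoch(w, matches)
```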
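On the Open Datasets row: the Dota 2 data were downloaded from OpenDota, but the paper does not describe the retrieval pipeline. The snippet below is a minimal sketch that assumes OpenDota's public REST endpoint /api/matches/{match_id}; the match ID and the choice of fields are placeholders, not details from the paper.

```python
import requests

# Hypothetical example: fetch one Dota 2 match from the OpenDota REST API and keep
# only the fields a group-comparison model needs (hero IDs per team, win/loss label).
MATCH_ID = 271145478  # placeholder ID, not taken from the paper

resp = requests.get(f"https://api.opendota.com/api/matches/{MATCH_ID}", timeout=30)
resp.raise_for_status()
match = resp.json()

# player_slot < 128 denotes the Radiant side, >= 128 the Dire side.
radiant_heroes = [p["hero_id"] for p in match["players"] if p["player_slot"] < 128]
dire_heroes = [p["hero_id"] for p in match["players"] if p["player_slot"] >= 128]
label = 1 if match["radiant_win"] else 0
print(radiant_heroes, dire_heroes, label)
```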
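On the Dataset Splits and Experiment Setup rows: the protocol described there (random 80/20 split, cross-validation on the training set to pick the best hyperparameter, then a single evaluation on the test set) can be sketched as follows. This is an illustration only, using scikit-learn with a stand-in logistic-regression model, synthetic data, and a stand-in hyperparameter grid; the paper tunes k, the number of latent factors of its own models.

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.linear_model import LogisticRegression  # stand-in model, not the paper's

# X: one game per row (e.g., signed team-membership indicators), y: win/loss labels.
# Random data here only so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50)).astype(float)
y = rng.integers(0, 2, size=1000)

# Random 80% / 20% train/test split, as described in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Cross-validate on the training set to choose the best parameter
# (the paper chooses k this way; C is used here purely as a stand-in).
best_param, best_acc = None, -np.inf
for param in [0.01, 0.1, 1.0, 10.0]:
    accs = []
    for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_tr):
        model = LogisticRegression(C=param, max_iter=1000)
        model.fit(X_tr[tr_idx], y_tr[tr_idx])
        accs.append(model.score(X_tr[va_idx], y_tr[va_idx]))
    if np.mean(accs) > best_acc:
        best_param, best_acc = param, np.mean(accs)

# Retrain on the full training set with the selected parameter, then evaluate once.
final_model = LogisticRegression(C=best_param, max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", final_model.score(X_te, y_te))
```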