Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Agent Representations for Ice Hockey

Authors: Guiliang Liu, Oliver Schulte, Pascal Poupart, Mike Rudd, Mehrsan Javan

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our player representations in three major sports analytics tasks. Our experimental results, based on a large dataset that contains over 4.5M events, show state-of-the-art performance for our VaRLAE on facilitating 1) identifying the acting player, 2) estimating expected goals, and 3) predicting the final score difference.
Researcher Affiliation | Collaboration | 1. School of Computing Science, Simon Fraser University; 2. Cheriton School of Computer Science, University of Waterloo; 3. SLiQ Lab, Sportlogiq; 4. Vector Institute
Pseudocode | No | The paper includes mathematical formulations and descriptions of algorithms but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | To help level the analytics playing field, we have placed our code in the public domain at https://github.com/Guiliang/player-embedding-ice-hockey
Open Datasets | No | We utilize a dataset constructed by Sportlogiq. The data provides information about game events and player actions for the entire 2018-2019 National Hockey League (NHL) season, which contains 4,534,017 events, covering 31 teams, 1,196 games and 1,003 players. No public-availability information, DOI, or citation for public access is provided.
Dataset Splits | Yes | We randomly divide the dataset containing 1,196 games into a training set (80%), a validation set (10%), and a testing set (10%) and implement 5 independent runs.
Hardware Specification | No | Our computations were facilitated by a GPU donation from NVIDIA. This names a hardware vendor but gives no specific model numbers or detailed specifications needed for reproducibility.
Software Dependencies | No | The paper describes various models and concepts but does not list specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x).
Experiment Setup | No | The paper states 'The Appendix gives further details about experimental settings and implementations,' so these details are deferred from the main text. No specific hyperparameters (e.g., learning rate, batch size, epochs) or system-level training settings are provided in the main body.
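The reported protocol (a random 80%/10%/10% game-level split of the 1,196 games, repeated over 5 independent runs) can be sketched as follows. This is not the authors' code; the function name, placeholder game IDs, and the use of the run index as a seed are illustrative assumptions:

```python
import random

def split_games(game_ids, seed, train_frac=0.8, val_frac=0.1):
    """Randomly partition game IDs into train/validation/test sets.

    Splitting at the game level (rather than the event level) keeps all
    events from one game in the same partition.
    """
    rng = random.Random(seed)  # seeded RNG so each run is reproducible
    ids = list(game_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# Five independent runs, each with its own random split.
games = list(range(1196))  # placeholder IDs for the 1,196 NHL games
for run in range(5):
    train, val, test = split_games(games, seed=run)
    # 80%/10%/10% of 1,196 games -> 956 / 119 / 121
    assert len(train) == 956 and len(val) == 119 and len(test) == 121
```

Note that with 1,196 games the 10% test partition absorbs the rounding remainder (121 games rather than 119); the paper does not specify how fractional counts are handled, so this is one plausible convention.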