Generating Long-term Trajectories Using Deep Hierarchical Networks

Authors: Stephan Zheng, Yisong Yue, Patrick Lucey

NeurIPS 2016

Reproducibility Variable (Result): LLM Response
Research Type (Experimental): "We showcase our approach in a case study on learning to imitate demonstrated basketball trajectories, and show that it generates significantly more realistic trajectories compared to non-hierarchical baselines as judged by professional sports analysts. We applied our approach to modeling basketball behavior data. We validated the hierarchical policy network (HPN) by learning a movement policy of individual basketball players that predicts as the micro-action the instantaneous velocity v_t^i = π_micro(s_t, h_t). Training data: We trained the HPN on a large dataset of tracking data from professional basketball games (Yue et al. [16])."
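The excerpt's micro-policy maps the current state s_t and a latent macro-goal h_t to an instantaneous velocity v_t^i. A minimal sketch of this interface, assuming a linear parameterization in numpy purely for illustration (the paper's actual HPN uses recurrent GRU layers and attention; all names and shapes here are assumptions):

```python
import numpy as np

def pi_micro(s_t, h_t, W_s, W_h, b):
    """Toy stand-in for the micro-policy: predicts the instantaneous
    2-D velocity v_t^i from state s_t and latent macro-goal h_t.
    A linear map here; the real HPN is a deep recurrent network."""
    return W_s @ s_t + W_h @ h_t + b

rng = np.random.default_rng(0)
s_t = rng.normal(size=4)        # e.g. player position + context features
h_t = rng.normal(size=3)        # latent macro-goal embedding
W_s = rng.normal(size=(2, 4))   # state-to-velocity weights
W_h = rng.normal(size=(2, 3))   # goal-to-velocity weights
b = np.zeros(2)

v_t = pi_micro(s_t, h_t, W_s, W_h, b)  # predicted (vx, vy)
```

The point of the sketch is only the signature: the macro-goal h_t enters the micro-action prediction alongside the raw state, which is what makes the policy hierarchical.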
Researcher Affiliation (Collaboration): Stephan Zheng (Caltech, stzheng@caltech.edu), Yisong Yue (Caltech, yyue@caltech.edu), Patrick Lucey (STATS, plucey@stats.com).
Pseudocode (No): The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code (No): The paper does not provide any statement or link indicating that the source code for the described method is publicly available.
Open Datasets (No): The paper mentions using a "large dataset of tracking data from professional basketball games (Yue et al. [16])" but does not provide concrete access information for this dataset (e.g., a direct link, DOI, repository name, or explicit statement of public availability).
Dataset Splits (No): The paper states "we extracted 130,000 tracks for training and 13,000 as a holdout set" but does not specify a separate validation set or explicit train/validation/test percentages.
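The reported counts (130,000 training tracks, 13,000 holdout tracks, roughly a 91/9 split) can be reproduced mechanically. A minimal sketch, assuming tracks are held in a list; the shuffling, seed, and function name are assumptions, not details from the paper:

```python
import random

def split_tracks(tracks, n_train=130_000, n_holdout=13_000, seed=0):
    """Shuffle and split extracted tracks into disjoint train/holdout
    sets, mirroring the counts reported in the paper."""
    tracks = list(tracks)
    random.Random(seed).shuffle(tracks)  # deterministic shuffle
    return tracks[:n_train], tracks[n_train:n_train + n_holdout]

# Toy usage with scaled-down counts standing in for the 143,000 tracks.
train, holdout = split_tracks(range(143), n_train=130, n_holdout=13)
```

A replication would still need to decide how (or whether) to carve a validation set out of the training portion, since the paper does not say.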
Hardware Specification (No): The acknowledgments mention "a GPU donation (Tesla K40 and Titan X) by NVIDIA", but the paper does not explicitly state that these GPUs were used to run the experiments, nor does it give CPU, memory, or other details of the computing environment.
Software Dependencies (No): The paper mentions "GRU memory cells" and "batch normalization" as techniques, but it does not name specific software with version numbers (e.g., a Python version, or a deep learning framework such as TensorFlow or PyTorch with versions) that would be needed for replication.
Experiment Setup (No): The paper describes some setup details, such as the spatial discretization (400x380 cells of 0.25 ft x 0.25 ft), an input sequence length of 50 frames, and architectural elements (GRU memory cells, a 2-layer fully connected network), and it mentions predicting "the next 4 micro-actions". However, it does not give concrete hyperparameters such as the learning rate, batch size, optimizer settings, or number of training epochs.
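The one setup detail that is fully specified, the spatial discretization (400x380 cells of 0.25 ft x 0.25 ft, implying roughly a 100 ft x 95 ft area), can be sketched as a coordinate-to-cell mapping. The clipping behavior, row-major cell ordering, and function name below are assumptions for illustration:

```python
import numpy as np

CELL_FT = 0.25              # cell side length in feet (from the paper)
GRID_X, GRID_Y = 400, 380   # grid size in cells (from the paper)

def position_to_cell(x_ft, y_ft):
    """Map a continuous court position (in feet) to a flat cell index
    in [0, GRID_X * GRID_Y). Out-of-bounds positions are clipped to
    the nearest edge cell (an assumption, not stated in the paper)."""
    ix = int(np.clip(x_ft / CELL_FT, 0, GRID_X - 1))
    iy = int(np.clip(y_ft / CELL_FT, 0, GRID_Y - 1))
    return iy * GRID_X + ix  # row-major flattening (assumed ordering)

idx = position_to_cell(50.0, 47.5)  # roughly the center of the grid
```

Discretizing positions this way turns trajectory prediction into classification over 152,000 cells, which is consistent with the paper's use of a spatial grid, though the exact indexing scheme is not documented.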