Generating Multi-Agent Trajectories using Programmatic Weak Supervision

Authors: Eric Zhan, Stephan Zheng, Yisong Yue, Long Sha, Patrick Lucey

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.
Researcher Affiliation | Collaboration | Eric Zhan (Caltech, ezhan@caltech.edu); Stephan Zheng (Salesforce, stephan.zheng@salesforce.com); Yisong Yue (Caltech, yyue@caltech.edu); Long Sha & Patrick Lucey (STATS, {lsha,plucey}@stats.com)
Pseudocode | Yes | Algorithm 1 describes LF-window25, which computes macro-intents from the last positions in 25-timestep windows (LF-window50 is analogous). Algorithm 2 describes LF-stationary, which computes macro-intents from stationary positions. A sketch of a windowed labeling function appears after the table.
Open Source Code | Yes | Code is available at https://github.com/ezhan94/multiagent-programmatic-supervision.
Open Datasets | Yes | The dataset was provided by STATS: https://www.stats.com/data-science/.
Dataset Splits | No | The paper states 'There are 107,146 training and 13,845 test examples' but does not explicitly mention a validation split.
Hardware Specification | No | Information not found. The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | Information not found. The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with versions).
Experiment Setup | Yes | We model each latent variable z_t^k as a multivariate Gaussian with diagonal covariance of dimension 16. All output models are implemented with memory-less 2-layer fully-connected neural networks with a hidden layer of size 200. Our agent-models sample from a multivariate Gaussian with diagonal covariance, while our macro-intent models sample from a multinomial distribution over the macro-intents. All hidden states (h_{g,t}, h_t^1, ..., h_t^K) are modeled with 200 2-layer GRU memory cells each. We maximize the log-likelihood/ELBO with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2015) and a learning rate of 0.0001. A minimal configuration sketch follows the table.
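
The labeling functions referenced in the Pseudocode row are simple programmatic rules. Below is a minimal Python sketch in the spirit of LF-window25: it segments a trajectory into 25-timestep windows and labels every timestep in a window with the (discretized) position the agent occupies at the window's end. The function name, grid discretization, and court dimensions are assumptions for illustration only; the paper's Algorithm 1 and the released code are authoritative, and LF-stationary would instead label segments by the positions where the agent remains stationary.

```python
import numpy as np

def lf_window(trajectory, window=25, n_bins=10, court_dims=(50.0, 94.0)):
    """Hypothetical sketch of a window-based labeling function (cf. LF-window25).

    trajectory: array of shape (T, 2) holding (x, y) coordinates.
    Returns one integer macro-intent label per timestep: the grid cell
    containing the agent's position at the end of its current window.
    """
    T = trajectory.shape[0]
    labels = np.zeros(T, dtype=int)
    for start in range(0, T, window):
        end = min(start + window, T) - 1              # last timestep of this window
        x, y = trajectory[end]
        # Discretize the final position into an n_bins x n_bins grid over the court
        bx = min(int(x / court_dims[0] * n_bins), n_bins - 1)
        by = min(int(y / court_dims[1] * n_bins), n_bins - 1)
        labels[start:end + 1] = by * n_bins + bx      # single cell index as the macro-intent
    return labels
```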
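
The hyperparameters quoted in the Experiment Setup row map directly onto a small model configuration. The PyTorch-style sketch below shows a 16-dimensional diagonal-Gaussian latent, memory-less 2-layer MLP heads with hidden size 200, a multinomial macro-intent head, 2-layer GRUs with 200 units, and Adam with learning rate 0.0001. Module names, input dimensions, and the wiring between components are assumptions made for illustration; the released repository is the authoritative implementation.

```python
import torch
import torch.nn as nn

LATENT_DIM, HIDDEN_DIM, LR = 16, 200, 1e-4  # values quoted from the paper

class AgentDecoder(nn.Module):
    """Memory-less 2-layer MLP emitting a diagonal-covariance Gaussian (agent-model head)."""
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, 2 * output_dim),    # mean and log-variance
        )

    def forward(self, x):
        mean, log_var = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

class MacroIntentDecoder(nn.Module):
    """Memory-less 2-layer MLP emitting a multinomial distribution over macro-intents."""
    def __init__(self, input_dim, n_macro_intents):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, n_macro_intents),
        )

    def forward(self, x):
        return torch.distributions.Categorical(logits=self.net(x))

# Hidden states are carried by 2-layer GRUs with 200 units each; the input size
# here (latent plus a 2-D position) is an assumption for illustration.
gru = nn.GRU(input_size=LATENT_DIM + 2, hidden_size=HIDDEN_DIM, num_layers=2)

agent_decoder = AgentDecoder(input_dim=HIDDEN_DIM, output_dim=2)
macro_decoder = MacroIntentDecoder(input_dim=HIDDEN_DIM, n_macro_intents=100)  # e.g. a 10x10 grid
optimizer = torch.optim.Adam(
    list(gru.parameters()) + list(agent_decoder.parameters()) + list(macro_decoder.parameters()),
    lr=LR,
)
```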