Collaborative Uncertainty in Multi-Agent Trajectory Forecasting

Authors: Bohan Tang, Yiqi Zhong, Ulrich Neumann, Gang Wang, Siheng Chen, Ya Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We first use two self-generated synthetic datasets with a limited number of agents as a toy version of the real-world problem. We use these simplified datasets to test our method's ability to capture the distribution information of input data that obeys a certain type of multivariate distribution. We then conduct extensive experiments on two published benchmarks to demonstrate the value of our proposed method on real-world problems. We introduce the experiments in Sec. 4.1 and Sec. 4.2.
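As an illustration of the kind of synthetic data described above, the sketch below (an assumption, not code from the paper) draws toy samples for two agents from a multivariate Gaussian with cross-agent correlation; all means, covariances, and sizes here are illustrative.

```python
import numpy as np

# Hypothetical toy dataset: 2D positions (x1, y1, x2, y2) for two agents,
# drawn from a multivariate Gaussian whose covariance correlates the two
# agents' x-coordinates. Values are illustrative, not from the paper.
def make_toy_dataset(n_samples=1000, rho=0.6, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(4)
    cov = np.eye(4)
    # introduce cross-agent correlation between the agents' x-coordinates
    cov[0, 2] = cov[2, 0] = rho
    return rng.multivariate_normal(mean, cov, size=n_samples)

data = make_toy_dataset()
# empirical correlation between agent 1's x and agent 2's x
emp_corr = np.corrcoef(data[:, 0], data[:, 2])[0, 1]
```

A model that captures distribution information should recover a cross-agent correlation close to the generating `rho` from such data.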
Researcher Affiliation | Academia | ¹Shanghai Jiao Tong University, ²University of Southern California, ³Beijing Institute of Technology
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology is open-source or publicly available.
Open Datasets | Yes | Argoverse [42] and nuScenes [47] are two widely used multi-agent trajectory forecasting benchmarks.
Dataset Splits | Yes | For Argoverse, the sequences are split into training, validation, and test sets with 205942, 39472, and 78143 sequences, respectively. For nuScenes, the prediction instances are split into training, validation, and test sets with 32186, 8560, and 9041 instances, respectively.
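The split proportions implied by the counts quoted above can be checked in a few lines; the counts are taken directly from this report, and the helper below is only a quick sanity check.

```python
# Split sizes quoted in the report above.
argoverse = {"train": 205942, "val": 39472, "test": 78143}
nuscenes = {"train": 32186, "val": 8560, "test": 9041}

def fractions(splits):
    """Return each split's share of the total, rounded to three places."""
    total = sum(splits.values())
    return {name: round(count / total, 3) for name, count in splits.items()}

argoverse_frac = fractions(argoverse)  # roughly a 64/12/24 split
nuscenes_frac = fractions(nuscenes)    # roughly a 65/17/18 split
```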
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud instance specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library or solver names with version numbers (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup | No | The paper describes aspects of the model implementation, such as using 'four-layer multilayer perceptrons (MLPs)' for decoders, but it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed system-level training configurations.