BCDiff: Bidirectional Consistent Diffusion for Instantaneous Trajectory Prediction

Authors: Rongqing Li, Changsheng Li, Dongchun Ren, Guangyi Chen, Ye Yuan, Guoren Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our proposed BCDiff significantly improves the accuracy of instantaneous trajectory prediction on the ETH/UCY and Stanford Drone datasets, compared to related approaches.
Researcher Affiliation | Collaboration | Rongqing Li (Beijing Institute of Technology, lirongqing99@gmail.com); Changsheng Li (Beijing Institute of Technology, lcs@bit.edu.cn); Dongchun Ren (ALLRIDE.AI, Dongchun.ren@allride.ai); Guangyi Chen (CMU & MBZUAI, guangyichen1994@gmail.com); Ye Yuan (Beijing Institute of Technology, yuan-ye@bit.edu.cn); Guoren Wang (Beijing Institute of Technology, wanggrbit@126.com)
Pseudocode | No | The paper describes the model's architecture and procedures in detail, including figures such as 'Structure of BCDUnit', but it does not present any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing the source code for the proposed BCDiff framework, nor does it provide a link to a code repository.
Open Datasets | Yes | Dataset. We verify the effectiveness of our proposed method on the widely used ETH/UCY [37, 26] and Stanford Drone [38] Dataset (SDD).
Dataset Splits | Yes | We follow the widely used leave-one-scene-out protocol, i.e., the models are trained on 4 scenes and tested on the remaining one [15, 42]. (See the split sketch after this table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models, memory, or specific computing environments used for running the experiments.
Software Dependencies | No | The paper refers to various models and architectures (e.g., 'Trajectron++', 'Bi-LSTM', 'Transformer') but does not specify version numbers for any software dependencies, libraries, or programming languages used in the implementation.
Experiment Setup | Yes | In the instantaneous trajectory prediction setting, the observations are reduced to 2 frames, and the length of future predictions is 12 frames. Following previous works [46, 15, 31], we sample 20 future predicted trajectories, and report the final error by the minimum error over all predicted trajectories. (See the best-of-20 evaluation sketch after this table.)
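For concreteness, here is a minimal sketch of the leave-one-scene-out protocol referenced in the Dataset Splits row. The five ETH/UCY scene names are standard in the literature; the `train_fn` and `eval_fn` callables are hypothetical placeholders, since the paper does not describe its training loop.

```python
# Minimal sketch of the leave-one-scene-out protocol for ETH/UCY.
# Scene names are the conventional five; train_fn/eval_fn are
# hypothetical helpers, not part of the paper.

ETH_UCY_SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_scene_out(scenes, train_fn, eval_fn):
    """Train on all scenes but one; evaluate on the held-out scene."""
    results = {}
    for held_out in scenes:
        train_scenes = [s for s in scenes if s != held_out]
        model = train_fn(train_scenes)          # fit on the 4 remaining scenes
        results[held_out] = eval_fn(model, held_out)
    return results
```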
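Likewise, a minimal sketch of the best-of-20 evaluation from the Experiment Setup row: for each test trajectory, K = 20 futures are sampled and the minimum error over the samples is reported. The array shapes and the independent minima for ADE and FDE follow common practice for this protocol and are assumptions, not details taken from the paper.

```python
import numpy as np

def min_ade_fde(pred, gt):
    """Best-of-K error (K = 20 samples, 12 future frames in the paper).

    pred: (K, T, 2) array of K sampled future trajectories
    gt:   (T, 2) ground-truth future trajectory
    Returns (minADE, minFDE) over the K samples.
    """
    # Per-timestep Euclidean displacement for each of the K samples.
    dist = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T)
    ade = dist.mean(axis=1)                          # average over the 12 frames
    fde = dist[:, -1]                                # error at the final frame
    # Assumption: minADE and minFDE are taken independently, as is common.
    return ade.min(), fde.min()

# Example with the paper's setting: 20 samples, 12 predicted frames.
pred = np.random.randn(20, 12, 2)
gt = np.random.randn(12, 2)
print(min_ade_fde(pred, gt))
```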