TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents
Authors: Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, Dinesh Manocha (pp. 6120–6127)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In order to evaluate its performance, we collected trajectory datasets in a large city consisting of varying conditions and traffic densities. The dataset includes many challenging scenarios where vehicles, bicycles, and pedestrians move among one another. We evaluate the performance of TrafficPredict on our new dataset and highlight its higher accuracy for trajectory prediction by comparing with prior prediction methods. |
| Researcher Affiliation | Collaboration | Baidu Research, Baidu Inc.; The University of Hong Kong; The Chinese University of Hong Kong; University of Maryland at College Park |
| Pseudocode | No | The paper describes the algorithm details in text and uses diagrams, but does not include formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions releasing a dataset, not the source code for the proposed methodology. 'Our new dataset has been released over the WWW (Apolloscape 2018).' |
| Open Datasets | Yes | Our new dataset has been released over the WWW (Apolloscape 2018). Apolloscape. 2018. Trajectory dataset for urban traffic. http://apolloscape.auto/trajectory.html. |
| Dataset Splits | No | The paper states, 'We train the model by minimizing the loss for all the trajectories in the training dataset,' but it does not specify explicit training/validation/test splits by percentage or absolute sample counts. |
| Hardware Specification | Yes | The model is trained on a single Tesla K40 GPU with a batch size of 8. |
| Software Dependencies | No | The paper mentions 'Adam (Kingma and Ba 2014) optimization' but does not list specific software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In our evaluation benchmarks, the dimension of hidden state of spatial and temporal edge cell is set to 128 and that of node cell is 64 (for both instance layer and category layer). We also apply the fixed input dimension of 64 and attention layer of 64. During training, Adam (Kingma and Ba 2014) optimization is applied with β1=0.9 and β2=0.999. Learning rate is scheduled as 0.001 and a staircase weight decay is applied. |
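The hyperparameters quoted in the Experiment Setup row can be collected into a short configuration sketch. Note this is an illustrative reconstruction: the paper names no framework (see Software Dependencies), so PyTorch is an assumption here, the placeholder model stands in for the actual TrafficPredict network, and the step size and decay factor of the "staircase" schedule are guesses not stated in the paper.

```python
import torch
from torch import nn, optim

# Dimensions and batch size as reported in the paper.
EDGE_HIDDEN = 128   # hidden state of spatial and temporal edge cells
NODE_HIDDEN = 64    # hidden state of node cells (instance and category layers)
INPUT_DIM = 64      # fixed input dimension
ATTN_DIM = 64       # attention layer dimension
BATCH_SIZE = 8      # trained on a single Tesla K40 GPU

# Placeholder module standing in for the TrafficPredict network
# (the real model uses an LSTM-based instance/category layer graph).
model = nn.LSTMCell(INPUT_DIM, NODE_HIDDEN)

# Adam with the stated learning rate and betas.
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

# "Staircase weight decay" interpreted here as a step-wise learning-rate
# schedule; step_size and gamma below are illustrative, not from the paper.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```

A training loop would then call `optimizer.step()` per batch and `scheduler.step()` per epoch; since the paper gives no epoch count or decay interval, those remain unspecified.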