WSiP: Wave Superposition Inspired Pooling for Dynamic Interactions-Aware Trajectory Prediction
Authors: Renzhi Wang, Senzhang Wang, Hao Yan, Xiang Wang
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on two public highway datasets, NGSIM and highD, verify the effectiveness of WSiP by comparison with current state-of-the-art baselines. |
| Researcher Affiliation | Academia | Renzhi Wang¹, Senzhang Wang¹*, Hao Yan¹, Xiang Wang² (¹Central South University, ²National University of Defense Technology) |
| Pseudocode | No | The paper describes its methods in text and figures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code of the work is publicly available at https://github.com/Chopin0123/WSiP. |
| Open Datasets | Yes | We use two public datasets, NGSIM (Halkias and Colyar 2006) and highD (Krajewski et al. 2018), for evaluation. |
| Dataset Splits | Yes | We split the whole dataset into training, validation and testing sets: 70% of the data are used for training, 10% for validation and 20% for testing. (A split sketch appears after this table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper notes that the model is 'implemented using PyTorch' but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | A 13×5 spatial grid is defined around the target, where each column corresponds to a single lane and the rows are separated by a distance of 15 feet. The MLP that embeds historical trajectories is composed of a fully connected layer of size 32 with ReLU as the activation function. Both the encoder and decoder in our model are based on LSTM. The dimension of the hidden state is 64 for the encoder LSTM and 128 for the decoder LSTM. The model is implemented using PyTorch and trained in an end-to-end manner using Adam with a learning rate of 0.001. (A model sketch appears after this table.) |
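
For concreteness, here is a minimal Python sketch of the 70/10/20 split quoted in the Dataset Splits row. The function name `split_indices` and the sequential, unshuffled ordering are illustrative assumptions; only the percentages come from the paper.

```python
import numpy as np

# Minimal sketch of the reported 70/10/20 train/validation/test split.
# The sequential (unshuffled) ordering is an assumption for illustration.
def split_indices(n_samples: int):
    idx = np.arange(n_samples)
    n_train = int(0.7 * n_samples)   # 70% for training
    n_val = int(0.1 * n_samples)     # 10% for validation
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]     # remaining ~20% for testing
    return train, val, test
```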
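
The Experiment Setup row can likewise be sketched in PyTorch. The class name `WSiPBackboneSketch`, the forward interface, and the prediction horizon are hypothetical, and the paper's wave-superposition pooling module is omitted; only the reported sizes (a 32-unit fully connected embedding with ReLU, a 64-d encoder LSTM, a 128-d decoder LSTM) and the Adam learning rate of 0.001 come from the quoted text.

```python
import torch
import torch.nn as nn

class WSiPBackboneSketch(nn.Module):
    """Hedged sketch of the encoder-decoder backbone described above.

    The wave-superposition pooling between vehicles is NOT reproduced here;
    only the quoted layer sizes are taken from the paper.
    """
    def __init__(self, input_dim: int = 2, pred_len: int = 25):
        super().__init__()
        # MLP embedding of historical positions: one FC layer of size 32 + ReLU.
        self.embed = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU())
        # Encoder LSTM with a 64-d hidden state, decoder LSTM with a 128-d one.
        self.encoder = nn.LSTM(32, 64, batch_first=True)
        self.decoder = nn.LSTM(64, 128, batch_first=True)
        self.out = nn.Linear(128, input_dim)  # project back to (x, y)
        self.pred_len = pred_len

    def forward(self, hist: torch.Tensor) -> torch.Tensor:
        # hist: (batch, obs_len, 2) past trajectory of the target vehicle.
        _, (h, _) = self.encoder(self.embed(hist))
        # Feed the final encoder state to the decoder at every future step
        # (a common seq2seq simplification; an assumption, not the paper's design).
        dec_in = h[-1].unsqueeze(1).repeat(1, self.pred_len, 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)  # (batch, pred_len, 2) predicted positions

model = WSiPBackboneSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr 0.001
```

The prediction horizon of 25 steps is a placeholder; the paper's actual observation and prediction lengths are not stated in the quoted setup.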