Social-DPF: Socially Acceptable Distribution Prediction of Futures
Authors: Xiaodan Shi, Xiaowei Shao, Guangming Wu, Haoran Zhang, Zhiling Guo, Renhe Jiang, Ryosuke Shibasaki
AAAI 2021, pp. 2550-2557
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments over several trajectory prediction benchmarks demonstrate that our method is able to forecast socially acceptable distributions in complex scenarios. |
| Researcher Affiliation | Academia | (1) Center for Spatial Information Science, the University of Tokyo; (2) Earth Observation Data Integration and Fusion Research Initiative, the University of Tokyo; (3) Information Technology Center, the University of Tokyo |
| Pseudocode | No | The paper describes its methods in detail through text and mathematical equations but does not include a distinct pseudocode block or algorithm section. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | In this section, the proposed model is evaluated on two publicly available datasets: UCY (Lerner, Chrysanthou, and Lischinski 2007) and ETH (Pellegrini et al. 2009). |
| Dataset Splits | Yes | The proposed model is trained and tested on the two datasets with a leave-one-out approach: trained on four sets and tested on the remaining set (see the split sketch after the table). |
| Hardware Specification | Yes | The experiments are implemented in PyTorch on Ubuntu 16.04 LTS using a GTX 1080 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Ubuntu 16.04 LTS'. Ubuntu is given with a version number, but PyTorch is not, and the criterion requires specific version numbers for key software components to count as a 'Yes'. |
| Experiment Setup | Yes | The size of the hidden states of all LSTMs is set to 128. The embedding layers are composed of a fully connected layer of size 64 for Eq. 6 and 128 for the others. The batch size is set to 8 and all the methods are trained for 200 epochs. The RMSprop optimizer is used to train the proposed model with a learning rate of 0.001. We clip the gradients of the LSTM with a maximum threshold of 10 to stabilize the training process. We set λ1 and λ2 in Eq. 11 to 0.1. The model outputs GMMs with three components. |
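The leave-one-out split is simple enough to express directly. Below is a minimal Python sketch; the five scene names are the conventional ones for the ETH/UCY benchmarks and are an assumption here, since the paper only states that training uses four sets and testing the remaining one.

```python
# Minimal sketch of the leave-one-out protocol over the five ETH/UCY scenes.
# Scene names are the conventional benchmark names, assumed for illustration.
SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

# One split per scene: hold that scene out for testing, train on the rest.
splits = [
    {"test": held_out, "train": [s for s in SCENES if s != held_out]}
    for held_out in SCENES
]

for split in splits:
    print(f"train: {split['train']}  ->  test: {split['test']}")
```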
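The reported hyperparameters can be collected into a training-loop skeleton. The sketch below is a hedged PyTorch illustration: the model body is a stand-in single-LSTM regressor, not the authors' Social-DPF architecture, and the loss is a placeholder rather than Eq. 11. Only the quoted settings come from the paper: hidden size 128, embedding size 64, batch size 8, 200 epochs, RMSprop with learning rate 0.001, gradient clipping at 10, three GMM components, and λ1 = λ2 = 0.1.

```python
import torch
import torch.nn as nn

HIDDEN, EMBED, COMPONENTS = 128, 64, 3   # LSTM hidden size, Eq. 6 embedding size, GMM components
BATCH, EPOCHS, LR, CLIP = 8, 200, 1e-3, 10.0
LAMBDA1 = LAMBDA2 = 0.1                  # Eq. 11 weights (not exercised by the placeholder loss)

class ToyPredictor(nn.Module):
    """Stand-in single-LSTM model; NOT the Social-DPF architecture."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(2, EMBED)                # FC embedding of 2-D positions (size 64)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, COMPONENTS * 5)   # weight + 2-D mean + 2-D scale per component

    def forward(self, x):                               # x: (batch, time, 2)
        h, _ = self.lstm(torch.relu(self.embed(x)))
        return self.head(h[:, -1])                      # GMM parameters for the next step

model = ToyPredictor()
optimizer = torch.optim.RMSprop(model.parameters(), lr=LR)

for epoch in range(EPOCHS):
    obs = torch.randn(BATCH, 8, 2)                      # dummy observed trajectories
    target = torch.randn(BATCH, COMPONENTS * 5)         # dummy target; placeholder for Eq. 11
    loss = nn.functional.mse_loss(model(obs), target)
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), CLIP)  # gradient clipping, threshold 10
    optimizer.step()
```

Whether the paper clips by norm or by value is not specified; `clip_grad_norm_` is used here as one plausible reading of "a maximum threshold of 10".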