Modeling Trajectories with Recurrent Neural Networks
Authors: Hao Wu, Ziyang Chen, Weiwei Sun, Baihua Zheng, Wei Wang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental study based on real taxi trajectory datasets shows that both of our approaches largely outperform the existing approaches. |
| Researcher Affiliation | Academia | School of Computer Science, Fudan University, Shanghai, China; Shanghai Key Laboratory of Data Science, Fudan University, Shanghai, China; Singapore Management University, Singapore |
| Pseudocode | No | The paper does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. Methods are described through mathematical formulations and textual explanations. |
| Open Source Code | Yes | The source code is available at https://github.com/wuhao5688/RNN-TrajModel. |
| Open Datasets | Yes | The Porto dataset is a 1.8GB open dataset (http://www.kaggle.com/c/pkdd-15-predict-taxi-service-trajectory-i). |
| Dataset Splits | Yes | We split the dataset in the ratio of 8:1:1 to get the training set, validation set and test set. |
| Hardware Specification | Yes | For the hardware environment, we use one Nvidia GTX 1080 GPU and an Intel Core i7-6700K CPU to run the model. |
| Software Dependencies | No | The paper mentions using LSTM and RMSProp algorithms but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for implementation. |
| Experiment Setup | Yes | For both models, we set the embedding size of input state as 400 for PTsmall and SHsmall, and 600 for SHlarge and PTlarge. We also set the dimension of destination state embedding to be the same as that of input state. We set the hidden units of LSTM to 400/600 for the different models and the dropout rate for LSTM to be 0.1 using the strategy in [Zaremba et al., 2014]. We train the model using the RMSProp algorithm [Hinton, 2012] with a learning rate of 1e-4 and a decay rate of 0.9. We clip the gradient by norm to 1.0 [Gustavsson et al., 2012] and uniformly initialize the embeddings and parameters in [-0.03, 0.03]. For LPIRNN, we set the size of the fully connected layer to 200. |
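
As a rough illustration of the configuration quoted in the Experiment Setup row, the sketch below wires the reported hyperparameters (embedding size 400/600 shared by input-state and destination-state embeddings, an LSTM with matching hidden size and 0.1 dropout on non-recurrent connections, RMSProp with learning rate 1e-4 and decay 0.9, gradient clipping by norm to 1.0, uniform initialization in [-0.03, 0.03]) into a training step. The released implementation is TensorFlow-based; this PyTorch-style sketch is only a hedged illustration, and names such as `TrajectoryModel`, `NUM_STATES`, and `train_step` are hypothetical rather than taken from the paper or the repository.

```python
# Illustrative sketch (not the authors' code): the hyperparameters reported in
# the paper, expressed as a PyTorch model and training step. All class and
# variable names here are hypothetical.
import torch
import torch.nn as nn

EMB_DIM = 400       # 400 for PTsmall/SHsmall, 600 for PTlarge/SHlarge
HIDDEN = 400        # LSTM hidden units, matched to the embedding size
NUM_STATES = 10000  # hypothetical size of the road-network state vocabulary


class TrajectoryModel(nn.Module):
    def __init__(self, num_states, emb_dim, hidden):
        super().__init__()
        # Input-state and destination-state embeddings share the same dimension.
        self.state_emb = nn.Embedding(num_states, emb_dim)
        self.dest_emb = nn.Embedding(num_states, emb_dim)
        self.lstm = nn.LSTM(emb_dim * 2, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.1)  # dropout on non-recurrent connections
        self.out = nn.Linear(hidden, num_states)
        # Uniform initialization in [-0.03, 0.03] for embeddings and parameters.
        for p in self.parameters():
            nn.init.uniform_(p, -0.03, 0.03)

    def forward(self, states, dest):
        # Concatenate the current-state embedding with the (broadcast)
        # destination embedding at every time step.
        dest_vec = self.dest_emb(dest).unsqueeze(1).expand(-1, states.size(1), -1)
        x = torch.cat([self.state_emb(states), dest_vec], dim=-1)
        h, _ = self.lstm(x)
        return self.out(self.dropout(h))  # next-state logits per time step


model = TrajectoryModel(NUM_STATES, EMB_DIM, HIDDEN)
# RMSProp with learning rate 1e-4 and decay (smoothing) 0.9, as reported.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4, alpha=0.9)


def train_step(states, dest, targets):
    optimizer.zero_grad()
    logits = model(states, dest)
    loss = nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten())
    loss.backward()
    # Clip gradients by global norm to 1.0 before the parameter update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()


# Example usage with a dummy batch of 4 trajectories of length 10.
states = torch.randint(0, NUM_STATES, (4, 10))
dest = torch.randint(0, NUM_STATES, (4,))
targets = torch.randint(0, NUM_STATES, (4, 10))
print(train_step(states, dest, targets))
```

The 200-unit fully connected layer reported for LPIRNN is omitted here; the sketch only covers the shared recurrent configuration and optimizer settings described above.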