Learning Geo-Contextual Embeddings for Commuting Flow Prediction

Authors: Zhicheng Liu, Fabio Miranda, Weiting Xiong, Junyan Yang, Qiao Wang, Claudio Silva (pp. 808-816)

AAAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We evaluate our model using real-world dataset from New York City and the experimental results demonstrate the effectiveness of our proposed method against the state of the art. |
| Researcher Affiliation | Academia | 1) School of Information Science and Engineering, Southeast University, Nanjing, China; 2) Tandon School of Engineering, New York University, New York, USA; 3) School of Architecture, Southeast University, Nanjing, China; 4) Media Lab, Massachusetts Institute of Technology, Boston, USA |
| Pseudocode | Yes | Algorithm 1: Training Algorithm |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | We use the 2010 New York City census tracts as geographic units (2168 units in total). For commuting trips and urban indicators, we use the following datasets: LODES The 2015 Origin-Destination Employment Statistics (LODES) dataset presents the commuting trips of interest (US Census Bureau 2015)... PLUTO The 2015 NYC Primary Land Use Tax Lot Output (PLUTO) presents the urban indicators of interest (NYC DCP 2015)... We randomly divide the commuting trips into training, validation and test datasets by 6:2:2. |
| Dataset Splits | Yes | We randomly divide the commuting trips into training, validation and test datasets by 6:2:2. |
| Hardware Specification | Yes | The experiments were executed on a Intel E5-2690 v4 2.6 GHz, 256 GB of RAM, and a NVIDIA Tesla P100 GPU with 12 GB of RAM. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2017) and Deep Graph Library (Wang et al. 2019a)' but does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | Three main hyperparameters of GMEL are examined, namely the number of GAT layers, embedding size and multitask weights. The results are shown in Fig. 4. |
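The reported 6:2:2 random split of commuting trips is simple to reproduce. A minimal sketch, assuming only NumPy; the function name, ratio argument, and fixed seed are illustrative choices, not from the paper:

```python
import numpy as np

def split_indices(n_trips, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly partition trip indices into train/val/test by the given ratios."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_trips)          # shuffle all trip indices
    n_train = int(ratios[0] * n_trips)      # 60% for training
    n_val = int(ratios[1] * n_trips)        # 20% for validation
    # remainder (20%) goes to the test set
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(10)        # 6 train, 2 val, 2 test indices
```

The indices would then select rows from the LODES origin-destination trip table; a fixed seed makes the partition repeatable across runs.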