Generating Origin-Destination Matrices in Neural Spatial Interaction Models

Authors: Ioannis Zachos, Mark Girolami, Theodoros Damoulas

NeurIPS 2024

Reproducibility Variable — Result — LLM Response
Research Type — Experimental. "We empirically test our framework on both synthetic and real-world data from Cambridge, UK and Washington, DC, USA. We compare GENSIT against SIM-MCMC [13], SIM-NN [17], SIT-MCMC [47] and the Geo-contextual Multitask Embedding Learner (GMEL) [27]."
Researcher Affiliation — Academia. (1) Department of Engineering, Cambridge University, Cambridge, CB2 1PZ; (2) The Alan Turing Institute, London, NW1 2DB; (3) Departments of Statistics & Computer Science, University of Warwick, Coventry, CV4 7AL.
Pseudocode — Yes. Algorithm 1: Generating Neural Spatial Interaction Tables, with complexity O(NE(τJ + IJ)).
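The stated complexity can be read as N ensemble samples times E epochs, where each epoch costs τ·J for the τ-step SDE solve over the J destinations plus I·J for sampling the I × J table. A hypothetical accounting sketch (the function name and decomposition are ours, not from the paper):

```python
def gensit_cost(N: int, E: int, tau: int, I: int, J: int) -> int:
    """Operation count implied by O(NE(tau*J + I*J)).

    N    -- ensemble size (number of posterior samples)
    E    -- training epochs
    tau  -- Euler-Maruyama steps per epoch
    I, J -- origin/destination dimensions of the table
    """
    per_epoch = tau * J + I * J  # SDE solve cost + table-sampling cost
    return N * E * per_epoch

# Example with the Cambridge table dimensions (I = 69, J = 13):
print(gensit_cost(N=1, E=1, tau=1, I=69, J=13))  # 13 + 897 = 910
```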
Open Source Code — Yes. Codebase found at https://github.com/YannisZa/GeNSIT
Open Datasets — Yes. "We empirically test our framework on both synthetic and real-world data from Cambridge, UK and Washington, DC, USA. In the Cambridge dataset, the ground truth ODM is a 69 × 13 contingency table with 33,704 agents. We apply our method to the Washington dataset, where the ground truth ODM is a 179 × 179 contingency table with 200,029 agents. Both y and D have been sourced from the UK's population census dataset provided by the Office for National Statistics. Our codebase and the real-world data we used are accessible from the Supplementary Material."
Dataset Splits — Yes. "In the case of the Washington DC data, we employ the same train/test/validation split as in [27]."
Hardware Specification — Yes. "All experiments were run using a 32-core CPU machine with 128GB of memory."
Software Dependencies — No. The paper mentions software such as "PyTorch [26]" and the "Adam optimizer [23]" but does not specify version numbers, which are required for reproducibility.
Experiment Setup — Yes. "The input layer is set to the observed log-destination attractions y ∈ R^J... The output layer is two-dimensional due to the parameter vector θ ∈ R^2. For both datasets we set the number of hidden layers to one and the number of nodes to 20. The hidden and output layers have linear and absolute activation functions, respectively. The NN parameters W are initialised by sampling uniformly over the region [0, 4]^((J+1)×20+21×2). We use the Adam optimizer [23] with a 0.002 learning rate. The bias is initialised uniformly on [0, 4]. An Euler-Maruyama numerical solver ϕ_HW is employed throughout the paper with a time discretisation step of Δt = 0.01 and number of steps τ = 1. The low and high SDE noise levels correspond to σ = 0.014 and σ = 0.141, respectively... We set the responsiveness parameter ε = 1, and the parameter δ, relating to the job availability of a destination to which no agents travel, to 0... We follow [13, 47] in fixing σ_d = 0.03 and σ_T, σ_Λ to 0.07..."
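A minimal PyTorch sketch of the network described above (one hidden layer of 20 units, linear hidden activation, absolute-value output activation, parameters initialised uniformly on [0, 4], Adam with a 0.002 learning rate). The module name is ours and J = 13 is taken from the Cambridge dataset purely for illustration; this is a sketch of the stated setup, not the authors' implementation:

```python
import torch
import torch.nn as nn

J = 13  # number of destinations (Cambridge dataset); illustrative


class ThetaNet(nn.Module):
    """Maps log-destination attractions y in R^J to theta in R^2."""

    def __init__(self, n_hidden: int = 20):
        super().__init__()
        self.hidden = nn.Linear(J, n_hidden)  # linear (identity) activation
        self.out = nn.Linear(n_hidden, 2)
        # Uniform initialisation on [0, 4] for all weights and biases
        for layer in (self.hidden, self.out):
            nn.init.uniform_(layer.weight, 0.0, 4.0)
            nn.init.uniform_(layer.bias, 0.0, 4.0)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # Absolute activation on the two-dimensional output
        return torch.abs(self.out(self.hidden(y)))


model = ThetaNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)
theta = model(torch.randn(J))
print(theta.shape)  # torch.Size([2])
```

The absolute-value output keeps θ non-negative, consistent with its role as a parameter vector of the spatial interaction model.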