Prediction of Spatial Point Processes: Regularized Method with Out-of-Sample Guarantees

Authors: Muhammad Osama, Dave Zachariah, Peter Stoica

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The method is demonstrated using synthetic as well as real spatial data.
Researcher Affiliation | Academia | Muhammad Osama (muhammad.osama@it.uu.se), Dave Zachariah (dave.zachariah@it.uu.se), Peter Stoica (peter.stoica@it.uu.se), Division of System and Control, Department of Information Technology, Uppsala University
Pseudocode | Yes | Algorithm 1: Conformal intensity interval; Algorithm 2: Majorization-minimization method (an illustrative conformal-interval sketch follows the table).
Open Source Code | Yes | The code for Algorithms 1 and 2 is provided on GitHub.
Open Datasets | Yes | First, we consider the hickory trees data set [1], which consists of coordinates of hickory trees in a spatial domain X ⊂ R^2 [...] [1] P. J. Diggle, Lancaster University. https://www.lancaster.ac.uk/staff/diggle/pointpatternbook/datasets/. [...] Next we consider crime data in Portland police districts [16, 10], which consists of locations of calls-of-service received by Portland Police between January and March 2017 [...] [16] National Institute of Justice. Real-time crime forecasting challenge posting. https://nij.gov/funding/Pages/fy16-crime-forecasting-challenge-document.aspx#data.
Dataset Splits | No | The paper does not explicitly provide training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split citations). It refers to 'out-of-sample' performance but gives no partitioning details.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments.
Software Dependencies | No | The paper mentions that code is provided on GitHub but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The paper mentions specific regularization weights (γ = 0.499, γ = 0.4) in its numerical experiments, but it does not provide comprehensive experimental setup details such as learning rates, batch sizes, optimizer settings, or other hyperparameters required for full reproducibility.
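
For readers unfamiliar with the conformal construction named in the Pseudocode row above, the following is a minimal, self-contained sketch of a generic split-conformal interval for per-cell event counts. It is an illustration only, not the paper's Algorithm 1: the function name, the mean-count point estimate, and the absolute-residual nonconformity score are assumptions made for this example.

```python
import numpy as np

# Illustrative sketch only: a generic split-conformal interval for per-cell
# event counts on a spatial grid. This is NOT the paper's Algorithm 1
# ("Conformal intensity interval"); the point estimate and score function
# below are assumptions made for this example.

def split_conformal_count_interval(counts, alpha=0.1, rng=None):
    """Return a (lower, upper) interval for a new cell's count with
    approximately 1 - alpha out-of-sample coverage, assuming the observed
    cells are exchangeable."""
    rng = np.random.default_rng(rng)
    counts = np.asarray(counts, dtype=float)

    # Randomly split the cells into a fitting half and a calibration half.
    idx = rng.permutation(len(counts))
    half = len(counts) // 2
    fit_idx, cal_idx = idx[:half], idx[half:]

    # Point estimate from the fitting split (here simply the mean count,
    # standing in for a fitted intensity model).
    lam_hat = counts[fit_idx].mean()

    # Nonconformity scores on the calibration split.
    scores = np.abs(counts[cal_idx] - lam_hat)

    # Conformal quantile with finite-sample correction.
    n_cal = len(cal_idx)
    q_level = min(1.0, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)
    q = np.quantile(scores, q_level)

    # Counts are nonnegative, so clamp the lower endpoint at zero.
    return max(0.0, lam_hat - q), lam_hat + q

# Example: counts of events in 200 grid cells drawn from a Poisson model.
cell_counts = np.random.default_rng(0).poisson(lam=5.0, size=200)
lo, hi = split_conformal_count_interval(cell_counts, alpha=0.1, rng=1)
print(f"90% conformal interval for a new cell count: [{lo:.2f}, {hi:.2f}]")
```

Under the exchangeability assumption, the printed interval covers a new cell's count with probability at least 1 - alpha; the paper's method instead produces regularized intensity intervals over the spatial domain with out-of-sample guarantees.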