Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Prediction of Spatial Point Processes: Regularized Method with Out-of-Sample Guarantees

Authors: Muhammad Osama, Dave Zachariah, Peter Stoica

NeurIPS 2019 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The method is demonstrated using synthetic as well as real spatial data.
Researcher Affiliation | Academia | Muhammad Osama, Dave Zachariah, Peter Stoica (Division of System and Control, Department of Information Technology, Uppsala University)
Pseudocode | Yes | Algorithm 1: Conformal intensity interval; Algorithm 2: Majorization-minimization method
Open Source Code | Yes | The code for Algorithms 1 and 2 is provided on GitHub.
Open Datasets | Yes | First, we consider the hickory trees data set [1], which consists of coordinates of hickory trees in a spatial domain X ⊂ R² [...] [1] P. J. Diggle, Lancaster University. https://www.lancaster.ac.uk/staff/diggle/pointpatternbook/datasets/. [...] Next, we consider crime data in Portland police districts [16, 10], which consists of locations of calls-of-service received by Portland Police between January and March 2017 [...] [16] National Institute of Justice. Real-time crime forecasting challenge posting. https://nij.gov/funding/Pages/fy16-crime-forecasting-challenge-document.aspx#data.
Dataset Splits | No | The paper does not provide explicit training, validation, or test splits (e.g., percentages, sample counts, or a citation to a predefined split). It refers to "out-of-sample" performance but gives no partitioning details.
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models or memory specifications).
Software Dependencies | No | The paper states that code is provided on GitHub but does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The paper specifies regularization weights (γ = 0.499, γ = 0.4) for its numerical experiments, but it does not provide a complete experimental setup (e.g., learning rates, batch sizes, or optimizer settings) for full reproducibility.