Automatic Integration for Spatiotemporal Neural Point Processes

Authors: Zihao Zhou, Rose Yu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the performance of different neural STPPs using synthetic and real-world benchmark data. For synthetic data, our goal is to validate that AutoSTPP can accurately recover complex intensity functions. Additionally, we show that the errors resulting from numerical integration lead to a higher variance in the learned intensity than closed-form integration. For real-world data, we show that our model performs better than or on par with state-of-the-art methods.
Researcher Affiliation | Academia | Zihao Zhou, Department of Computer Science, University of California, San Diego, La Jolla, CA 92092, ziz244@ucsd.edu; Rose Yu, Department of Computer Science, University of California, San Diego, La Jolla, CA 92092, roseyu@ucsd.edu
Pseudocode | Yes | We have developed a program that harnesses the power of dynamic programming to compute derivatives efficiently with AutoDiff. See Appendix D for the detailed algorithm. (A minimal sketch of the automatic-integration idea appears after the table.)
Open Source Code | Yes | Our code is open-source at https://github.com/Rose-STL-Lab/AutoSTPP.
Open Datasets | Yes | We use six synthetic point process datasets simulated using Ogata's thinning algorithm [Chen, 2016]; see Appendix B for details. We follow the experiment design of Chen et al. [2020] and use two of the real-world datasets, Earthquake Japan and COVID New Jersey. (A generic thinning sketch follows the table.)
Dataset Splits | Yes | Each dataset was divided into training, validation, and testing sets in an 8:1:1 ratio based on the time range. We use 40 sequences for training, 5 for validation, and 5 for testing. (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, memory specifications, or cloud computing instance types.
Software Dependencies | No | The paper compares its efficient implementation with "PyTorch naive Auto Grad" (Figure 4 caption), implying the use of PyTorch. However, it does not specify exact version numbers for PyTorch or any other software libraries, frameworks, or dependencies, which is necessary for reproducibility.
Experiment Setup | Yes | Table 5: Hyperparameter settings for training AutoSTPP on all datasets. Optimizer: Adam; Learning rate: dataset-dependent, in [0.0002, 0.004]; Momentum: 0.9; Batch size: 128; Activation: tanh; N: 2 / 10; bias: true. (A hedged training-configuration sketch appears after the table.)
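
The Pseudocode row above refers to the paper's automatic-integration idea: an antiderivative network is trained so that its derivative, obtained by automatic differentiation, serves as the intensity, and integrals are then evaluated in closed form from the network itself. The following is a minimal one-dimensional PyTorch sketch of that idea only, not the dynamic-programming routine from Appendix D (which reuses intermediate terms for efficient mixed partial derivatives); the names Antiderivative and integrand are illustrative.

import torch
import torch.nn as nn

class Antiderivative(nn.Module):
    # Network F(t) whose derivative f(t) = dF/dt plays the role of the integrand.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t):
        return self.net(t)

def integrand(model, t):
    # f(t) = dF/dt via autograd; create_graph=True keeps the graph so that
    # f(t) can itself be trained with gradient descent.
    t = t.detach().requires_grad_(True)
    (f,) = torch.autograd.grad(model(t).sum(), t, create_graph=True)
    return f

model = Antiderivative()
t = torch.linspace(0.0, 1.0, 5).unsqueeze(-1)
f_vals = integrand(model, t)             # values of the integrand f(t)
integral = model(t[-1:]) - model(t[:1])  # integral of f over [0, 1], no quadrature

Because the integral of f is available exactly as a difference of F, the compensator term of a point-process log-likelihood needs no Monte Carlo or numerical quadrature, which is the source of the lower variance reported for closed-form integration.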
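
The Open Datasets row mentions simulation by Ogata's thinning algorithm [Chen, 2016]. Below is a generic sketch of thinning for a one-dimensional temporal point process, not the authors' simulation code: the paper's synthetic datasets use their own intensity functions and include a spatial component that is omitted here, and the function name ogata_thinning is illustrative.

import numpy as np

def ogata_thinning(intensity, lambda_max, t_end, rng=None):
    # Simulate event times on [0, t_end] for a process whose conditional
    # intensity intensity(t, history) is bounded above by lambda_max.
    rng = np.random.default_rng() if rng is None else rng
    t, events = 0.0, []
    while True:
        # Propose a candidate from a homogeneous Poisson process with rate lambda_max.
        t += rng.exponential(1.0 / lambda_max)
        if t > t_end:
            break
        # Thinning step: accept with probability intensity(t) / lambda_max.
        if rng.uniform() <= intensity(t, events) / lambda_max:
            events.append(t)
    return np.array(events)

# Hypothetical example: a bounded sinusoidal intensity (history unused here).
events = ogata_thinning(lambda t, hist: 1.0 + np.sin(t), lambda_max=2.0, t_end=100.0)

For self-exciting intensities the bound cannot be fixed globally; Ogata's modified algorithm recomputes the upper bound after each candidate, which this short sketch does not show.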
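
The Dataset Splits row describes an 8:1:1 division by time range. Here is a minimal sketch of how such a split might look for a single sequence of event times stored as a NumPy array; the 0.8 and 0.9 cut points follow the stated ratio, while the sequence-level 40/5/5 selection is a separate step not shown.

import numpy as np

def split_by_time_range(seq, t_start, t_end):
    # Assign events to train/val/test by absolute time: first 80% of the
    # time range for training, next 10% for validation, last 10% for testing.
    t_train = t_start + 0.8 * (t_end - t_start)
    t_val = t_start + 0.9 * (t_end - t_start)
    train = seq[seq < t_train]
    val = seq[(seq >= t_train) & (seq < t_val)]
    test = seq[seq >= t_val]
    return train, val, test

train, val, test = split_by_time_range(np.sort(np.random.uniform(0, 100, 500)), 0.0, 100.0)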
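
The Experiment Setup row summarizes Table 5. A hedged PyTorch sketch of that configuration follows, assuming the "Momentum 0.9" entry corresponds to Adam's first-moment coefficient beta1; the stand-in model, data, and loss are placeholders, since the real model is AutoSTPP trained with a point-process likelihood.

import torch
import torch.nn as nn

# Placeholder model with tanh activations, standing in for AutoSTPP.
model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))

# Table 5: Adam, learning rate in [0.0002, 0.004] (dataset-dependent), batch size 128.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Dummy dataset; the paper trains on event sequences, not random tensors.
data = torch.utils.data.TensorDataset(torch.randn(1024, 3), torch.randn(1024, 1))
loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)

for x, y in loader:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)  # stand-in loss; the paper uses the NLL
    loss.backward()
    optimizer.step()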