Imputing Missing Events in Continuous-Time Event Streams

Authors: Hongyuan Mei, Guanghui Qin, Jason Eisner

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment in multiple synthetic and real domains, using different missingness mechanisms, and modeling the complete sequences in each domain with a neural Hawkes process (Mei & Eisner, 2017). On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events, with particle smoothing consistently improving upon particle filtering. (A hedged sketch of the particle-weighting step appears after this table.)
Researcher Affiliation | Academia | (1) Department of Computer Science, Johns Hopkins University, USA; (2) Department of Physics, Peking University, China. Correspondence to: Hongyuan Mei <hmei@cs.jhu.edu>.
Pseudocode | Yes | Full details are spelled out in Algorithm 1 in Appendix C. ... Algorithm 2 in Appendix E uses dynamic programming to compute the loss (10) and its corresponding alignment a ... Our heuristic (Algorithm 3 of Appendix F) seeks to iteratively improve ẑ ... (A hedged sketch of a dynamic-programming alignment of this flavor appears after this table.)
Open Source Code | Yes | PyTorch code can be found at https://github.com/HMEIatJHU/neural-hawkes-particle-smoothing.
Open Datasets | Yes | Elevator System Dataset (Crites & Barto, 1996). ... New York City Taxi Dataset (Whong, 2014).
Dataset Splits | Yes | For each of the datasets, we possess fully observed data that we use to train the model and the proposal distribution. ... and training is early-stopped when the divergence stops decreasing on the held-out development set. For each dev and test example, we censored out some events from the fully observed sequence ... (A hedged sketch of one such censoring step appears after this table.)
Hardware Specification | Yes | We also thank NVIDIA Corporation for kindly donating two Titan X Pascal GPUs and the state of Maryland for the Maryland Advanced Research Computing Center.
Software Dependencies | No | The paper states 'PyTorch code can be found at...' but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | No | The paper states: 'See Appendix G for training details (e.g., hyperparameter selection).' Such details therefore exist, but they are deferred to an appendix rather than given in the main text.
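
The Research Type row above notes that particle smoothing consistently improves upon particle filtering. As a rough illustration only, the sketch below shows generic sequential importance weighting over imputed event sequences. The names `propose`, `log_p`, and `log_q` are placeholders supplied by the caller, not the paper's API; the paper's actual method uses a learned neural proposal distribution.

```python
import math

def weight_particles(observed, propose, log_p, log_q, n_particles=100):
    """Generic sequential-importance-sampling sketch (not the paper's code).

    Each particle z is a candidate set of imputed unobserved events drawn from
    a proposal q(z | x); its importance weight is p(x, z) / q(z | x),
    accumulated in log space for numerical stability."""
    particles, log_w = [], []
    for _ in range(n_particles):
        z = propose(observed)                      # draw imputed events from q
        log_w.append(log_p(observed, z) - log_q(z, observed))
        particles.append(z)
    m = max(log_w)                                 # stable normalization of weights
    w = [math.exp(lw - m) for lw in log_w]
    total = sum(w)
    return particles, [wi / total for wi in w]
```

The distinction between the two methods lies in the proposal: a filtering proposal conditions only on past observations, whereas the smoothing proposal described in the paper also conditions on future observations, which lets it place imputed events more accurately.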
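
The Pseudocode row refers to Algorithm 2, a dynamic program that computes the evaluation loss (10) and its corresponding alignment. The sketch below conveys only the general flavor of such a dynamic program: a monotone, edit-distance-style alignment between two event streams with illustrative costs (absolute time difference for matching same-type events, a constant for unmatched events). These costs are assumptions; the paper's loss (10) defines its own.

```python
def align_cost(pred, gold, delete_cost=1.0):
    """Edit-distance-style DP over two event sequences, each a list of
    (time, type) pairs sorted by time.  Illustrative costs only."""
    n, m = len(pred), len(gold)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:                              # leave pred[i] unmatched
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + delete_cost)
            if j < m:                              # leave gold[j] unmatched
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j] + delete_cost)
            if i < n and j < m and pred[i][1] == gold[j][1]:
                # match same-type events, paying their time difference
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1],
                                       dp[i][j] + abs(pred[i][0] - gold[j][0]))
    return dp[n][m]
```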
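
The Dataset Splits row says that dev and test examples were created by censoring out some events from fully observed sequences. Below is a minimal, hypothetical illustration of one such missingness mechanism (randomly dropping events of chosen types); the paper evaluates several mechanisms, and this snippet does not reproduce any of them exactly.

```python
import random

def censor(seq, missing_types, p_drop=0.5, seed=0):
    """Split a fully observed sequence of (time, type) events into observed
    events x and censored events z, dropping each event whose type is in
    `missing_types` with probability `p_drop` (illustrative mechanism only)."""
    rng = random.Random(seed)
    observed, censored = [], []
    for event in seq:
        if event[1] in missing_types and rng.random() < p_drop:
            censored.append(event)
        else:
            observed.append(event)
    return observed, censored
```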