Uncertainty on Asynchronous Time Event Prediction
Authors: Marin Biloš, Bertrand Charpentier, Stephan Günnemann
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our models on large-scale synthetic and real-world data. We compare to neural point process models: RMTPP [6] and the Neural Hawkes Process [18]. Additionally, we use various RNN models with knowledge of the time of the next event. We measure the accuracy of class prediction, the accuracy of time prediction, and evaluate on an anomaly detection task to show prediction uncertainty. |
| Researcher Affiliation | Academia | Marin Biloš, Bertrand Charpentier, Stephan Günnemann; Technical University of Munich, Germany |
| Pseudocode | No | The paper describes its models and methods textually and with diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and further supplementary material is available online. |
| Open Datasets | Yes | We use the following data (more details in Appendix G): (1) Graph. We generate data from a directed Erdős–Rényi graph where nodes represent the states and edges the weighted transitions between them. The time it takes to cross one edge is modelled with one normal distribution per edge. By randomly walking along this graph we created 10K asynchronous events with 10 unique classes (a generation sketch follows the table). (2) Stack Exchange. Sequences contain rewards as events that users get for participation on a question answering website. After preprocessing according to [6] we have 40 classes and over 480K events spread over 2 years of activity of around 6700 users. (3) Smart Home [1]. We use a recorded sequence from a smart house with 14 classes and over 1000 events. (4) Car Indicators. We obtained a sequence of events from a car's indicators that has around 4000 events with 12 unique classes. |
| Dataset Splits | Yes | We split the data into train, validation and test set (60% / 20% / 20%) and tune all models on a validation set using grid search over learning rate, hidden state dimension and L2 regularization. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | We split the data into train, validation and test set (60% / 20% / 20%) and tune all models on a validation set using grid search over learning rate, hidden state dimension and L2 regularization (a sketch of this protocol follows the table). |
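
The synthetic Graph dataset quoted above lends itself to a compact reconstruction. Below is a minimal sketch of that generation process, assuming `numpy` and `networkx`; the edge probability and the per-edge mean/std ranges are illustrative assumptions, since the paper only reports 10 classes and 10K events (its Appendix G has the full details).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Directed Erdős–Rényi graph: nodes are the 10 event classes (states),
# edges carry a transition weight and a normal crossing-time distribution.
n_classes, edge_prob = 10, 0.5  # edge_prob is an assumed value
graph = nx.gnp_random_graph(n_classes, edge_prob, seed=0, directed=True)
for u, v in graph.edges:
    graph.edges[u, v]["weight"] = rng.uniform()         # transition weight
    graph.edges[u, v]["mu"] = rng.uniform(1.0, 5.0)     # crossing-time mean
    graph.edges[u, v]["sigma"] = rng.uniform(0.1, 1.0)  # crossing-time std

# A random walk along the graph yields an asynchronous event stream of
# (class, arrival time) pairs.
events, times = [], []
state, t = 0, 0.0
while len(events) < 10_000:
    succ = list(graph.successors(state))
    if not succ:  # dead end: restart the walk from a random state
        state = int(rng.integers(n_classes))
        continue
    w = np.array([graph.edges[state, s]["weight"] for s in succ])
    nxt = succ[rng.choice(len(succ), p=w / w.sum())]
    e = graph.edges[state, nxt]
    t += max(rng.normal(e["mu"], e["sigma"]), 1e-3)  # keep gaps positive
    events.append(nxt)
    times.append(t)
    state = nxt
```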
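
The reported evaluation protocol reduces to a chronological 60/20/20 split plus grid search over learning rate, hidden state dimension, and L2 regularization. A minimal sketch follows; `train_and_evaluate` is a hypothetical placeholder for model training plus validation scoring, and the concrete grid values are assumptions, as the paper does not list them.

```python
import itertools

def train_and_evaluate(train_seq, val_seq, learning_rate, hidden_dim, l2_weight):
    """Hypothetical stand-in: train a model on train_seq with the given
    hyperparameters and return its validation accuracy on val_seq."""
    return 0.0  # placeholder score; replace with real training + evaluation

# Chronological 60% / 20% / 20% split (here of the `events` list from the
# sketch above, but any event sequence works).
n = len(events)
train = events[:int(0.6 * n)]
val = events[int(0.6 * n):int(0.8 * n)]
test = events[int(0.8 * n):]

# Grid search over the three hyperparameters the paper reports tuning;
# the concrete grid values below are assumptions.
grid = {
    "learning_rate": [1e-3, 1e-2],
    "hidden_dim": [32, 64, 128],
    "l2_weight": [0.0, 1e-4, 1e-3],
}
best_score, best_cfg = float("-inf"), None
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    score = train_and_evaluate(train, val, **cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg
# The best configuration would then be scored once on the held-out `test` split.
```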