Neural Datalog Through Time: Informed Temporal Modeling via Logical Specification

Authors: Hongyuan Mei, Guanghui Qin, Minjie Xu, Jason Eisner

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In both synthetic and real-world domains, we show that neural probabilistic models derived from concise Datalog programs improve prediction by encoding appropriate domain knowledge in their architecture. In our experiments, we show how to write down some domain-specific models for irregularly spaced event sequences in continuous time, and demonstrate that their structure improves their ability to predict held-out data. Section 6 is titled “Experiments” and contains performance graphs and comparisons.
Researcher Affiliation | Collaboration | ¹Computer Science Dept., Johns Hopkins Univ. ²Bloomberg LP.
Pseudocode | No | The paper describes the model’s rules and formulas but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our implementation can be found at https://github.com/HMEIatJHU/neural-datalog-through-time.
Open Datasets | Yes | Our code and datasets are available at the URL given in 2. Datasets: IPTV Domain (Xu et al., 2018); RoboCup Domain (Chen & Mooney, 2008).
Dataset Splits | No | We then generated learning curves by training the correctly structured model versus the standard NHP on increasingly long prefixes of the training set, and evaluating them on held-out data. In the IPTV domain, we randomly split training/test events based on a 90/10 ratio within each user's event sequence. While train/test splits are mentioned, a separate validation split is not explicitly defined with specific percentages or counts. (A sketch of such a per-user 90/10 split appears after the table.)
Hardware Specification | Yes | We thank NVIDIA Corporation for kindly donating two Titan X Pascal GPUs, and the state of Maryland for the Maryland Advanced Research Computing Center.
Software Dependencies | No | We implemented our NDTT framework using PyTorch (Paszke et al., 2017) and pyDatalog (Carbonell et al., 2016). The versions of these software dependencies are not specified.
Experiment Setup | Yes | We train our models with Adam (Kingma & Ba, 2015) using a batch size of 20. The learning rate is initialized to 1e-3 and decays by a factor of 0.5 every 20 epochs up to 100 epochs. For all models, we set the initial learning rate to 1e-3 and decay it by a factor of 0.5 every 5 epochs. (These settings are sketched in the second code example after the table.)
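
The per-user 90/10 split described under Dataset Splits could be reproduced along the lines of the following sketch. This is an illustration only, assuming events are grouped per user in a dictionary; the function name, data structure, and random seed are hypothetical and not taken from the authors' released code.

import random

def split_user_events(events_by_user, train_frac=0.9, seed=0):
    """Randomly split each user's event list into train/test at a 90/10 ratio.

    events_by_user maps a user id to that user's list of timestamped events;
    this structure is a hypothetical stand-in for the IPTV sequences.
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for user, events in events_by_user.items():
        indices = list(range(len(events)))
        rng.shuffle(indices)
        keep = set(indices[:int(round(train_frac * len(events)))])
        train[user] = [e for i, e in enumerate(events) if i in keep]
        test[user] = [e for i, e in enumerate(events) if i not in keep]
    return train, test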
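
The Experiment Setup hyperparameters (Adam, batch size 20, initial learning rate 1e-3, decay by 0.5 every 20 epochs, up to 100 epochs) map onto standard PyTorch components as in the minimal sketch below. The linear model, random batch, and mean-squared-error loss are placeholder assumptions standing in for the NDTT model and its log-likelihood objective.

import torch

model = torch.nn.Linear(8, 1)  # placeholder for the NDTT model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Decay the learning rate by a factor of 0.5 every 20 epochs, training up to 100 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

x, y = torch.randn(20, 8), torch.randn(20, 1)  # one dummy minibatch of size 20
for epoch in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)  # stands in for the NDTT objective
    loss.backward()
    optimizer.step()
    scheduler.step()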