Temporal Logic Point Processes

Authors: Shuang Li, Lu Wang, Ruizhi Zhang, Xiaofu Chang, Xuqin Liu, Yao Xie, Yuan Qi, Le Song

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrated the interpretability, prediction accuracy, and flexibility of the proposed temporal logic point processes (TLPP) on both synthetic data (including Hawkes and self-correcting point processes) and real data (including a healthcare application on sepsis patient mortality prediction and a finance application on credit card fraud event prediction).
Researcher Affiliation | Collaboration | (1) Department of Statistics, Harvard University; (2) Department of Computer Science, East China Normal University; (3) Department of Statistics, University of Nebraska-Lincoln; (4) Ant Group; (5) H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology; (6) School of Computational Science & Engineering, Georgia Institute of Technology.
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a link to a code repository for the methodology described.
Open Datasets | Yes | MIMIC-III is an electronic health record dataset of patients admitted to the intensive care unit (Johnson et al., 2016). A credit card dataset from the UCSD-FICO Data Mining Contest (FICO-UCSD, 2009) was used to detect fraudulent transactions. (FICO-UCSD, 2009. URL https://ebiquity.umbc.edu/blogger/2009/05/24/ucsd-data-mining-contest/.)
Dataset Splits | No | The paper mentions training data sizes (50, 500, and 4,000 patients) and a test data size (100 patients), but does not explicitly describe a separate validation set or its split.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | We considered a one-layer RNN and LSTM with 128 hidden units. ... For our model, weights of logic rules were initialized as a small number, say, 0.001 (like the standard initialization for neural networks). To ensure non-negative weights, projected gradient descent was used in training.
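
Since no source code is released (see the Open Source Code row), the following is a minimal, hypothetical sketch of the training step described in the Experiment Setup row: rule weights initialized to the small constant 0.001 and projected back onto the non-negative orthant after every gradient update. The gradient function `grad_fn` stands in for the gradient of the TLPP negative log-likelihood with respect to the rule weights and is an assumption, not the authors' implementation.

```python
import numpy as np

def project_nonnegative(w):
    """Project the weight vector onto the non-negative orthant (w >= 0)."""
    return np.maximum(w, 0.0)

def train_rule_weights(grad_fn, n_rules, lr=0.01, n_steps=1000):
    """Projected gradient descent for non-negative logic-rule weights.

    grad_fn(w) is assumed to return the gradient of the negative
    log-likelihood of the temporal logic point process with respect
    to the rule weights; its exact form is model-specific and is a
    placeholder here.
    """
    # Initialize weights to a small positive constant, as stated in the paper (0.001).
    w = np.full(n_rules, 1e-3)
    for _ in range(n_steps):
        w = w - lr * grad_fn(w)       # ordinary gradient step
        w = project_nonnegative(w)    # projection enforces the non-negativity constraint
    return w
```

The projection after each step is what distinguishes this from plain gradient descent: it keeps the learned rule weights non-negative, which is what allows them to be read as interpretable rule importances.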