STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

Authors: Meiyi Ma, Ji Gao, Lu Feng, John Stankovic

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of STLnet using large-scale real-world city data. The experimental results show STLnet not only improves the accuracy of predictions, but importantly also guarantees the satisfaction of model properties and increases the robustness of RNNs.
Researcher Affiliation | Academia | Meiyi Ma, Ji Gao, Lu Feng, John Stankovic, University of Virginia, {meiyi,jg6yd,lu.feng,stankovic}@virginia.edu
Pseudocode | Yes | Algorithm 1: Converting STL to DNF with Calculation of Satisfaction Range
Open Source Code | No | No explicit statement providing open-source code, and no direct link to a code repository for the described methodology, was found.
Open Datasets | Yes | The dataset includes 1.3 million instances of 6 pollutants (i.e., PM2.5, PM10, CO, SO2, NO2, O3) collected from 130 locations in Beijing every hour between 5/1/2014 and 4/30/2015 [15].
Dataset Splits | No | The paper mentions a "training phase" and a "testing phase" but gives no concrete split details (e.g., percentages, sample counts, or an explicit cross-validation setup).
Hardware Specification | Yes | The experiments are evaluated on a server machine with 20 CPUs, each core running at 2.2 GHz, and 4 Nvidia GeForce RTX 2080Ti GPUs. The operating system is CentOS 7.
Software Dependencies | No | The paper uses LSTM and Transformer networks but provides no version numbers for any software libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | No | The paper applies STLnet to LSTM and Transformer networks for multivariate sequential prediction and mentions general setup such as concatenating variables, but it provides no hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer settings, or other training configuration.
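The Pseudocode row refers to the paper's Algorithm 1, which converts STL formulas to disjunctive normal form (DNF). As a rough illustration of the core DNF-distribution step only — a hypothetical propositional sketch, not the paper's algorithm, which also handles temporal operators and satisfaction ranges — one can distribute conjunctions over disjunctions recursively:

```python
from itertools import product

# Atoms are plain strings standing in for STL predicates (e.g. "x > 0").
# A formula is either an atom or a tuple ("and", ...) / ("or", ...).
# This sketch omits temporal operators and satisfaction-range bookkeeping.

def to_dnf(formula):
    """Return the formula as a list of clauses (each a list of atoms),
    i.e. a disjunction of conjunctions."""
    if isinstance(formula, str):   # atomic predicate: a single one-atom clause
        return [[formula]]
    op, *args = formula
    if op == "or":                 # union of the operands' clause lists
        return [clause for a in args for clause in to_dnf(a)]
    if op == "and":                # distribute: cross-product of clause lists
        return [sum(combo, []) for combo in product(*map(to_dnf, args))]
    raise ValueError(f"unknown operator: {op}")
```

For example, `to_dnf(("and", ("or", "a", "b"), "c"))` yields `[["a", "c"], ["b", "c"]]`, i.e. (a ∧ c) ∨ (b ∧ c).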
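The Dataset Splits row notes that no explicit split is reported. For hourly time-series data such as the Beijing air-quality set, a chronological (non-shuffled) split is the standard fallback, so that the test period strictly follows the training period. A minimal sketch, with hypothetical 70/15/15 fractions not taken from the paper:

```python
def chronological_split(records, train_frac=0.7, val_frac=0.15):
    """Split time-ordered records into train/val/test without shuffling,
    so later timestamps never leak into earlier partitions."""
    n = len(records)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (records[:n_train],
            records[n_train:n_train + n_val],
            records[n_train + n_val:])

# Toy usage on 100 time-ordered records:
train, val, test = chronological_split(list(range(100)))
# -> 70 / 15 / 15 records, in temporal order
```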
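The Experiment Setup row lists the training details the paper omits. For reference, a reproduction would need to pin down a configuration like the following; every value here is hypothetical and none comes from the paper:

```python
# Hypothetical training configuration illustrating the unreported details
# (the model names come from the paper; all numeric values are invented).
config = {
    "model": "LSTM",         # or "Transformer", the two backbones used
    "optimizer": "Adam",     # hypothetical choice
    "learning_rate": 1e-3,   # hypothetical
    "batch_size": 64,        # hypothetical
    "epochs": 50,            # hypothetical
}
```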