Teaching LTLf Satisfiability Checking to Neural Networks

Authors: Weilin Luo, Hai Wan, Jianfeng Du, Xiaoda Li, Yuze Fu, Rongzhen Ye, Delong Zhang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that LTLfNet achieves good performance on synthetic datasets and generalizes to large-scale datasets. They also show that LTLfNet is competitive with state-of-the-art symbolic approaches such as nuXmv and CDLSC.
Researcher Affiliation | Academia | 1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; 2) Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China; 3) Guangdong University of Foreign Studies, Guangzhou, China; 4) Pazhou Lab, Guangzhou, China. Emails: {luowlin3,lixd36,fuyz,yerzh,zhangdlong3}@mail2.sysu.edu.cn, wanhai@mail.sysu.edu.cn, jfdu@gdufs.edu.cn
Pseudocode | Yes | Algorithm 1: LTLFEMBEDDING; Algorithm 2: COMBINE
Open Source Code | Yes | Our code and benchmarks are publicly available at https://github.com/wanderer0205/LTLfNet.
Open Datasets | No | The paper states: 'In order to generate synthetic datasets, we used the randltl tool in the SPOT framework to generate random formulae.' It also lists large-scale datasets such as 'LTL-as-LTLf', 'LTLf-Specific', 'NASA-Boeing', and 'DECLARE'. However, it does not provide direct links, DOIs, specific repository names, or formal citations for public access to these datasets (see the randltl generation sketch after this table).
Dataset Splits | Yes | The synthetic datasets include a training set of 16K formulae and a validation set of 2K formulae. Additional formulae are generated and divided into 6 test sets according to their size intervals: [20, 100), [100, 120), [120, 140), [140, 160), [160, 180), and [180, 200), with 2K randomly generated formulae per test set (see the size-bucketing sketch after this table).
Hardware Specification | Yes | All experiments were conducted on a single GPU (NVIDIA A100).
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'Cross Entropy Loss' but does not specify version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For Transformer, the number of heads is 4, the number of layers is 3, and the dimension of hidden layers in the encoder is 256; the dimension of embeddings dm is 1024 and the classification layer is an MLP; the batch size is 512, the learning rate is 1e-5, and the number of training epochs is 1000 with early stopping when the loss does not decrease for 30 epochs. For LTLfNet and TreeNN, the dimension of embeddings dm is 1024 and the classification layer is an MLP; the batch size is 128, the learning rate is 1e-3, and the number of training epochs is 256 with early stopping when the loss does not decrease for 30 epochs. For R-GCN, the dimension of embeddings dm is 1024, the dimension of the hidden layer in R-GCN is 32, and the classification layer is a two-layer MLP with a hidden dimension of 128; the batch size is 128, the learning rate is 1e-3, and the number of training epochs is 256 with early stopping when the loss does not decrease for 30 epochs (see the configuration sketch after this table).
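
To complement the 'Open Datasets' row, the following is a minimal sketch of how random formulae could be generated with SPOT's randltl command-line tool. It is not taken from the authors' code: the tree-size range, the atomic propositions, and the assumption that randltl is available on the PATH are all illustrative.

```python
# Minimal sketch (not from the paper's code): generate random LTL formulae
# by calling SPOT's randltl CLI. The tree-size range and atomic propositions
# below are assumptions chosen for illustration, not the authors' settings.
import subprocess

def generate_random_formulae(n, tree_size="20..100", aps=("a", "b", "c", "d")):
    """Return n random formulae (one string per line of randltl output)."""
    cmd = ["randltl", "-n", str(n), f"--tree-size={tree_size}", *aps]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

if __name__ == "__main__":
    for formula in generate_random_formulae(5):
        print(formula)
```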
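
For the 'Dataset Splits' row, here is a minimal sketch of how generated formulae could be bucketed into the six size-interval test sets. The size measure (`size_fn`) is a hypothetical helper, e.g. the number of nodes in the formula's syntax tree; the paper's exact size definition is not restated here.

```python
# Minimal sketch: group formulae into the six test-set size intervals
# [20,100), [100,120), ..., [180,200). `size_fn` is a hypothetical helper
# that returns a formula's size (e.g., its syntax-tree node count).
from bisect import bisect_right

BOUNDS = [20, 100, 120, 140, 160, 180, 200]  # interval boundaries from the paper

def bucket_by_size(formulae, size_fn):
    buckets = {(lo, hi): [] for lo, hi in zip(BOUNDS, BOUNDS[1:])}
    for f in formulae:
        i = bisect_right(BOUNDS, size_fn(f)) - 1
        if 0 <= i < len(BOUNDS) - 1:              # drop formulae outside [20, 200)
            buckets[(BOUNDS[i], BOUNDS[i + 1])].append(f)
    return buckets
```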
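
For the 'Experiment Setup' row, a minimal sketch that collects the reported hyperparameters into per-model configurations and instantiates the Adam optimizer and Cross Entropy Loss mentioned under 'Software Dependencies'. PyTorch is assumed (the paper does not name the framework), and `build_model` is a hypothetical factory for the four architectures.

```python
# Minimal sketch (PyTorch assumed): the hyperparameters reported in the paper,
# gathered into per-model configurations. `build_model` is a hypothetical
# factory; it is not part of the authors' published interface.
import torch
import torch.nn as nn

CONFIGS = {
    "Transformer": {"heads": 4, "layers": 3, "encoder_hidden_dim": 256,
                    "embed_dim": 1024, "batch_size": 512,
                    "lr": 1e-5, "max_epochs": 1000, "patience": 30},
    "LTLfNet":     {"embed_dim": 1024, "batch_size": 128,
                    "lr": 1e-3, "max_epochs": 256, "patience": 30},
    "TreeNN":      {"embed_dim": 1024, "batch_size": 128,
                    "lr": 1e-3, "max_epochs": 256, "patience": 30},
    "R-GCN":       {"embed_dim": 1024, "gcn_hidden_dim": 32,
                    "mlp_hidden_dim": 128, "batch_size": 128,
                    "lr": 1e-3, "max_epochs": 256, "patience": 30},
}

def make_training_objects(name, build_model):
    """Instantiate model, optimizer, and loss for one of the configurations."""
    cfg = CONFIGS[name]
    model = build_model(name, cfg)                       # hypothetical factory
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
    criterion = nn.CrossEntropyLoss()                    # SAT / UNSAT classification
    return model, optimizer, criterion, cfg
```

The early-stopping rule reported in the paper (halt training once the loss has not decreased for `patience` = 30 epochs) would live in the training loop, which is omitted from this sketch.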