Bridging LTLf Inference to GNN Inference for Learning LTLf Formulae

Authors: Weilin Luo, Pingjia Liang, Jianfeng Du, Hai Wan, Bo Peng, Delong Zhang (pp. 9849-9857)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate GLTLf on various datasets with noise. Our experimental results confirm the effectiveness of GNN inference in learning LTLf formulae and show that GLTLf is superior to the state-of-the-art approaches."
Researcher Affiliation | Academia | School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China; Guangzhou Key Laboratory of Multilingual Intelligent Processing, Guangdong University of Foreign Studies, Guangzhou, China; Pazhou Lab, Guangzhou, China
Pseudocode | Yes | "Algorithm 1: Interpreting LTLf Formulae" (a generic sketch of interpreting an LTLf formula over a finite trace appears after this table).
Open Source Code | Yes | "Our code, benchmarks and technical report are publicly available at https://github.com/a79461378945/Bridging-LTLf-Inference-to-GNN-Inference-for-Learning-LTLf-Formulae."
Open Datasets | No | "We used the randltl tool of SPOT (Duret-Lutz et al. 2016) to generate random formulae." The paper states the authors generated their own datasets but provides no direct link, DOI, or specific repository name for them; the citation covers the generation tool, not the generated datasets (a sketch of such generation via SPOT's Python bindings appears after this table).
Dataset Splits | No | "For each dataset, we generated a formula which has k_f sub-formulae of non-atomic propositions, and then randomly generated 250/250 positive/negative traces of this formula as the training set and 500/500 positive/negative traces as the test set." The quoted procedure defines only training and test sets; no validation split is described (a sketch of assembling such a split appears after this table).
Hardware Specification | Yes | "All experiments are run on a Linux machine equipped with an Intel(R) Xeon(R) Gold 5218R processor at 2.1 GHz and 126 GB RAM."
Software Dependencies | No | The paper mentions using Adam for optimization but provides no version numbers for any software libraries, frameworks, or solvers used in the experiments.
Experiment Setup | Yes | "Hyperparameters of GLTLf are set as follows: θ_ik = 100, k_g ∈ {3, 6, 9, 12, 15} to study the robustness of k_g, α = 0.1, β = 0.1, and γ = 10. We use Adam (Kingma and Ba 2015) to optimize the parameters in our model. The learning rate is set to 1e-4. The number of epochs is 400 and the batch size is 10." (A sketch of this optimization setup appears after this table.)
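
For readers unfamiliar with how LTLf formulae are interpreted over finite traces (the subject of the paper's Algorithm 1), here is a minimal sketch of the standard LTLf semantics. It is not the paper's algorithm: the tuple-based formula encoding and the `holds` function are illustrative assumptions.

```python
# Minimal sketch of LTLf evaluation over a finite trace (standard LTLf
# semantics; NOT the paper's Algorithm 1). A trace is a list of sets of
# atomic propositions; a formula is a nested tuple,
# e.g. ('U', ('ap', 'a'), ('ap', 'b')) for "a Until b".

def holds(formula, trace, i=0):
    """Return True iff `formula` holds at position i of the finite trace."""
    if i >= len(trace):
        return False  # empty-trace / out-of-range guard
    op = formula[0]
    if op == 'ap':                      # atomic proposition
        return formula[1] in trace[i]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'and':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == 'or':
        return holds(formula[1], trace, i) or holds(formula[2], trace, i)
    if op == 'X':                       # next (strong: requires a successor)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == 'F':                       # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':                       # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'U':                       # until
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Example: "a holds until b" on the trace {a}, {a}, {b}
print(holds(('U', ('ap', 'a'), ('ap', 'b')), [{'a'}, {'a'}, {'b'}]))  # True
```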
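The randltl generator cited in the Open Datasets row is also exposed through SPOT's Python bindings. A minimal sketch, assuming SPOT's Python bindings are installed and that the `tree_size` and `seed` keyword arguments mirror the CLI's --tree-size and --seed options:

```python
import itertools

import spot  # SPOT's Python bindings (available via conda-forge or distro packages)

# Draw 10 random LTL formulas over 3 atomic propositions. spot.randltl
# returns a generator, so we slice it explicitly; tree_size loosely
# controls formula size, and seed makes the draw reproducible.
for f in itertools.islice(spot.randltl(3, tree_size=15, seed=42), 10):
    print(f)
```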
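The 250/250 and 500/500 counts quoted in the Dataset Splits row can be made concrete with a small sketch. The `make_split` helper and the assumption that candidate traces have already been labeled against the target formula are hypothetical; only the counts come from the paper.

```python
import random

def make_split(pos_traces, neg_traces, n_train=250, n_test=500, seed=0):
    """Assemble the 250/250 train and 500/500 test split quoted above.

    pos_traces / neg_traces are traces already labeled against the target
    formula (e.g. with an LTLf evaluator); this helper is hypothetical.
    """
    rng = random.Random(seed)
    rng.shuffle(pos_traces)
    rng.shuffle(neg_traces)
    train = ([(t, 1) for t in pos_traces[:n_train]]
             + [(t, 0) for t in neg_traces[:n_train]])
    test = ([(t, 1) for t in pos_traces[n_train:n_train + n_test]]
            + [(t, 0) for t in neg_traces[n_train:n_train + n_test]])
    rng.shuffle(train)
    rng.shuffle(test)
    return train, test
```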
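Finally, a hedged sketch of the reported optimization setup. The paper does not name its deep learning framework; PyTorch, the stand-in linear model, and the synthetic data below are assumptions, while Adam, the 1e-4 learning rate, 400 epochs, and batch size 10 are quoted in the Experiment Setup row.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the GLTLf model and its training data (assumptions).
model = nn.Linear(8, 1)
data = TensorDataset(torch.randn(100, 8), torch.randn(100, 1))
loader = DataLoader(data, batch_size=10, shuffle=True)      # batch size 10

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, lr = 1e-4
loss_fn = nn.MSELoss()
for epoch in range(400):                                    # 400 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```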