On-the-fly Synthesis for LTL over Finite Traces
Authors: Shengping Xiao, Jianwen Li, Shufang Zhu, Yingying Shi, Geguang Pu, Moshe Vardi
AAAI 2021, pp. 6530-6537
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared our new approach with the traditional ones on extensive LTLf synthesis benchmarks. Experimental results showed that the pre-processing techniques have a significant advantage in synthesis performance in terms of scalability, and that the on-the-fly synthesis is able to complement extant approaches on both realizable and unrealizable cases. |
| Researcher Affiliation | Collaboration | Shengping Xiao (1), Jianwen Li (1), Shufang Zhu (2, 3), Yingying Shi (1), Geguang Pu (1) and Moshe Vardi (4); (1) East China Normal University, (2) Sapienza Università di Roma, (3) Shanghai Trusted Industrial Control Platform Co., Ltd, (4) Rice University |
| Pseudocode | Yes | Algorithm 1 (Synthesis): compute the winning strategy on the fly; Algorithm 2: implementation of the function getTransition. A hedged sketch of this on-the-fly search appears after the table. |
| Open Source Code | No | The paper states 'We implemented both the pre-processing and on-the-fly synthesis techniques in the tool OLFS.' but provides no link or explicit statement about the public availability of the source code. |
| Open Datasets | Yes | Benchmarks: We benchmarked the experiment with the 430 instances presented in (Bansal et al. 2020). We also selected two classes of patterns, U and GF, which originate from (Rozier and Vardi 2007) and are shown in Figures 2 and 3 respectively, to test the efficiency of the pre-processing techniques. |
| Dataset Splits | No | The paper mentions using 430 instances from (Bansal et al. 2020) and patterns from (Rozier and Vardi 2007), but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | Yes | Platform: We ran the experiments on a Red Hat 6.0 cluster with 2304 processor cores in 192 nodes (12 processor cores per node), running at 2.83 GHz with 48 GB of RAM per node. Each tool executed on a dedicated node with a timeout of 10 minutes, measuring execution time with the Unix time utility. |
| Software Dependencies | No | The paper mentions comparing with 'SYFT (Zhu et al. 2017) and LISASYNT (Bansal et al. 2020)' and that 'All three tools were run with their default parameters', but it does not provide specific version numbers for these tools or any other software dependencies. |
| Experiment Setup | No | The paper states 'All three tools were run with their default parameters' and specifies a 'timeout of 10 minutes', but it does not provide specific experimental setup details such as hyperparameters, model initialization, or training schedules. |
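
For context on the pseudocode row above: Algorithm 1 explores the synthesis game on demand, declaring a state winning for the system when some system move forces every environment response into a winning state. The following is a minimal Python sketch of that fixpoint idea, not the paper's actual algorithm: all names (`synthesize`, `system_moves`, `env_moves`, `successor`) and the cycle-cutting rule are illustrative assumptions, and the paper's exact handling of loops and its SAT-based state construction are elided.

```python
from typing import Callable, Dict, FrozenSet, Hashable, Iterable, Tuple

State = Hashable
Move = Hashable

def synthesize(
    initial: State,
    is_accepting: Callable[[State], bool],
    system_moves: Callable[[State], Iterable[Move]],
    env_moves: Callable[[State], Iterable[Move]],
    successor: Callable[[State, Move, Move], State],
) -> Tuple[bool, Dict[State, Move]]:
    """Explore the synthesis game on demand from `initial`.

    A state is winning for the system if it is accepting, or if some
    system move forces every environment response into a winning state.
    """
    strategy: Dict[State, Move] = {}   # winning system move per state
    cache: Dict[State, bool] = {}      # memoized verdicts

    def winning(state: State, path: FrozenSet[State]) -> bool:
        if state in cache:
            return cache[state]
        if is_accepting(state):
            cache[state] = True
            return True
        if state in path:
            # Revisiting an unresolved state: treat it as losing to cut
            # the cycle (a simplification; the paper resolves loops exactly).
            return False
        extended = path | {state}
        for sys_mv in system_moves(state):
            # A system move wins iff *all* environment responses
            # lead to winning successor states.
            if all(winning(successor(state, sys_mv, env_mv), extended)
                   for env_mv in env_moves(state)):
                strategy[state] = sys_mv
                cache[state] = True
                return True
        # Caching a losing verdict is only sound for acyclic state
        # spaces; kept here for brevity.
        cache[state] = False
        return False

    return winning(initial, frozenset()), strategy
```

For LTLf, `successor` would be instantiated with formula progression over the joint system/environment assignment, and the paper's pre-processing would shrink the input formula before this search begins; the sketch above is an assumption-laden outline, not the implementation in the tool OLFS.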