LTLf Synthesis as AND-OR Graph Search: Knowledge Compilation at Work
Authors: Giuseppe De Giacomo, Marco Favorito, Jianwen Li, Moshe Y. Vardi, Shengping Xiao, Shufang Zhu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented our approach in a tool called Cynthia and conducted comprehensive experiments comparing it to existing LTLf synthesis tools, including Lisa, Lydia, and Ltlfsyn from [Xiao et al., 2021], and demonstrate the merits of our approach. |
| Researcher Affiliation | Collaboration | Sapienza University of Rome; Banca d'Italia; East China Normal University; Rice University |
| Pseudocode | Yes | The paper includes clearly labeled algorithm blocks: 'Algorithm 1 SDD-based Forward Synthesis', 'Algorithm 2 Propagate Success Backwards', and 'Algorithm 3 SDD-based Expand Graph from An Or Node'. |
| Open Source Code | Yes | Tool available at https://whitemech.github.io/cynthia. |
| Open Datasets | Yes | We collected, in total, 1494 LTLf synthesis instances from literature: 40 Patterns instances [Xiao et al., 2021]; 54 Two-player-Games instances [Tabajara and Vardi, 2019; Bansal et al., 2020]; 1400 Random instances [Zhu et al., 2017; De Giacomo and Favorito, 2021]. |
| Dataset Splits | No | The paper mentions using LTLf synthesis instances as benchmarks for evaluation, but it does not specify any training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | Experiments were run on a computer cluster, where each instance took exclusive access to a computing node with Intel-Xeon processor running at 2.6 GHz, with 8GB of memory and 300 seconds of time limit. |
| Software Dependencies | Yes | We make use of library SDD-2.0 (http://reasoning.cs.ucla.edu/sdd) to handle all SDD related operations. |
| Experiment Setup | No | The paper describes the algorithms and their implementation details, and mentions some optimizations ('pre-processing techniques', 'look-ahead check'), but it does not provide specific hyperparameter values, training configurations, or other system-level settings needed to reproduce the experiments. |