Adaptive Reactive Synthesis for LTL and LTLf Modulo Theories

Authors: Andoni Rodríguez, César Sánchez

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Empirical Evaluation (Demonstration). We first illustrate using the running example ϕT how the on-the-fly approach behaves in practice. Specification ϕT is unrealizable for TZ, but a slight modification makes it realizable. If we replace (y < x) with (y ≤ x) we obtain ϕ'T = (R0 ∧ R'1), where R0 : (x < 2) → (y > 1) and R'1 : (x ≥ 2) → (y ≤ x). Specification ϕ'T is realizable in TZ (consider the strategy of the system to always play y = 2). The Booleanized version of ϕ'T is ϕ'B = ϕ ∧ ([(e0 ∨ e1) ∧ ¬(e0 ∧ e1)] → ϕextra), where ϕ = (s0 → s1) ∧ (¬s0 → s2) and ϕextra is: (e0 → (s012̄ ∨ s01̄2)) ∧ (e1 → (s0̄12 ∨ s0̄12̄ ∨ s0̄1̄2)), where e0, e1 ∈ B belong to the environment and represent (x < 2) and (x ≥ 2), respectively. Note that in ϕ'B there are no separate ek for (x = 2) and (x > 2). Also, note that a complete abstraction would also produce a decision e1.5 with choices {s012̄, s01̄2, s01̄2̄} that represents (x < 1), but an intelligent environment will never play e1.5, since it offers strictly more power to the system (more choices), so e0 already characterizes those plays. All choices from s012 to s0̄1̄2̄ have the same meaning as in ϕT. We show a concrete execution in Tab. 1, where we see how the T-controller responds to a few T-inputs.
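The realizability argument above (the system can always answer y = 2) can be spot-checked with a small query in the Python/Z3 stack the paper reports using. The snippet below is an illustrative sketch rather than the authors' tool: it asks Z3 whether some integer input x defeats the fixed response y = 2 against R0 and R'1, and an unsat answer means no such x exists.

# Sketch: verify that the constant response y = 2 satisfies both reactions
# R0: (x < 2) -> (y > 1) and R'1: (x >= 2) -> (y <= x) for every integer x.
from z3 import Int, Implies, And, Not, Solver, unsat

x, y = Int("x"), Int("y")
reactions = And(Implies(x < 2, y > 1),    # R0
                Implies(x >= 2, y <= x))  # R'1

s = Solver()
s.add(y == 2)          # the system's fixed choice
s.add(Not(reactions))  # look for an input x that falsifies a reaction
print("y = 2 wins for every x:", s.check() == unsat)

This only validates the particular single-step response y = 2; realizability of the full temporal specification is what the synthesis engine establishes.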
Researcher Affiliation Collaboration Andoni Rodríguez1,2 and César Sánchez1, 1IMDEA Software Institute, 2Universidad Politécnica de Madrid, {andoni.rodriguez,cesar.sanchez}@imdea.org
Pseudocode Yes Algorithm 1: Exhaustive get_E().
Input: v, VR, dec_ls
S ← VR
while S ≠ ∅ do
    r ∈ S, pos ← get_Pos(r, VR)
    ψ ← subst(r, v)
    if ψ is satisfied then
        return e ← dec_ls(pos)
    S ← S \ {r}
return Error
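A minimal Python sketch of the loop in Algorithm 1, assuming the Z3 stack reported by the authors; the helpers (get_Pos realized as VR.index, dec_ls as a position-indexed map) mirror the pseudocode names and are otherwise hypothetical.

# Sketch of Algorithm 1 (exhaustive get_E). `v` is the current T-valuation of
# the environment variables as (variable, value) pairs, `VR` is the list of
# candidate reactions, and `dec_ls` maps a reaction's position to its decision
# literal e_k. Not the authors' implementation.
from z3 import Solver, substitute, sat

def get_E(v, VR, dec_ls):
    S = list(VR)
    while S:                        # while S is non-empty
        r = S[0]                    # take a reaction r from S
        pos = VR.index(r)           # get_Pos(r, VR)
        psi = substitute(r, *v)     # subst(r, v): plug the concrete inputs in
        checker = Solver()
        checker.add(psi)
        if checker.check() == sat:  # the instantiated reaction is satisfied
            return dec_ls[pos]      # the decision e associated with r
        S.remove(r)                 # S <- S \ {r}
    raise RuntimeError("Error: no reaction matches the given input")

For instance, with VR = [x < 2, x >= 2], dec_ls = {0: "e0", 1: "e1"} and v = [(x, z3.IntVal(3))], the sketch returns "e1".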
Open Source Code No The paper states: "All experiments can be replicated for the corresponding LTLf benchmarks, except that we would need to use another more appropriate synthesis engine for such fragment (e.g. Nike by (Favorito 2023))." It also mentions using "Strix by (Meyer, Sickert, and Luttenberger 2018) as the synthesis engine and aigsim.c to simulate the controller with stimulous." but does not provide a link to their own implementation code.
Open Datasets No The paper mentions using "six specifications based on real industrial specifications: Cooker (Coo.) and Usb (Usb.) are original benchmarks from (Rodriguez and Sánchez 2023), whereas Lift (Li.), Train (Tr.), Connect (Con.) and Stage (St.) are modifications of original specifications that were unrealizable to make them realizable." While citing a paper for some benchmarks, it does not provide concrete access information (link, DOI, repository) for these specifications or any specific datasets used in their experiments.
Dataset Splits No The paper describes running input-output simulations with a certain number of timesteps (e.g., 1000 or 10,000) and using stimuli. However, it does not specify training, validation, or test dataset splits in terms of percentages, sample counts, or references to predefined splits for model training and evaluation.
Hardware Specification Yes We ran the experiments on a MacBook Air 12.4 with the M1 processor and 16 GB of memory.
Software Dependencies Yes We used Python 3.8.8 with Z3 4.12.2 for the implementation. We used Strix by (Meyer, Sickert, and Luttenberger 2018) as the synthesis engine and aigsim.c to simulate the controller with stimuli.
Experiment Setup No The paper describes the overall on-the-fly synthesis architecture and its components (Partitioner, Controller, Provider) and discusses adaptivity. However, it does not provide specific experimental setup details such as hyperparameters, learning rates, batch sizes, epochs, or other detailed training configurations that are typically found in empirical studies involving model training.