Best-Effort Synthesis: Doing Your Best Is Not Harder Than Giving Up

Authors: Benjamin Aminof, Giuseppe De Giacomo, Sasha Rubin

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study best-effort synthesis under environment assumptions specified in LTL, and show that this problem has exactly the same computational complexity as standard LTL synthesis: 2EXPTIME-complete. We provide optimal algorithms for computing best-effort strategies, both in the case of LTL over infinite traces and LTL over finite traces (i.e., LTLf). The main result of this paper is that best-effort synthesis under environment assumptions has exactly the same computational complexity as standard synthesis: 2EXPTIME-complete. Our algorithm employs a novel technique based on simultaneously playing multiple games with different objectives on the same arena.
Researcher Affiliation | Academia | Benjamin Aminof¹, Giuseppe De Giacomo² and Sasha Rubin³ (¹TU Vienna, Austria; ²University of Rome "La Sapienza", Italy; ³University of Sydney, Australia)
Pseudocode | Yes | Algorithm A. Given LTL formulas ϕ and E: 1. For every ξ ∈ {¬E, E → ϕ, E ∧ ϕ}, compute the DPA A_ξ = (D_ξ, c_ξ) (Theorem 4). 2. Form the product D = D_{¬E} × D_{E→ϕ} × D_{E∧ϕ}, and denote the lifted priority functions on D by d_{¬E}, d_{E→ϕ} and d_{E∧ϕ} (Definitions 4 and 5). 3. In the DPA-game on (D, d_{E→ϕ}), compute a uniform positional winning agent strategy f_ag for d_{E→ϕ} (Theorem 5). Let W ⊆ Q be the agent's winning region. 4. In the DPA-game on (D, d_{¬E}), compute the environment's winning region V ⊆ Q (Theorem 5). Thus, the environment can ensure E holds from every state in V. 5. Compute the environment-restriction of D to the set V, and call the resulting transition system D′ (Definition 6). Form the DPA-game (D′, d_{E∧ϕ}), where the new sink state is given an odd color smaller than all even colors in D. 6. In the DPA-game (D′, d_{E∧ϕ}), find a uniform positional strategy g_ag that is co-operatively winning (Theorem 6). 7. Return the agent strategy induced by the positional strategy k_ag : Q × 2^X → 2^Y, where k_ag(q, X) = f_ag(q, X) if q ∈ W, and k_ag(q, X) = g_ag(q, X) otherwise. (A sketch of this final strategy-combination step is given after the table below.)
Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories for the described methodology.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not mention public or open datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not provide information about training/validation/test splits.
Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments. Therefore, it does not mention hardware specifications used for running experiments.
Software Dependencies | No | The paper is theoretical and does not describe any empirical experiments. Therefore, it does not mention specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithms and complexity. It does not describe an empirical experimental setup, hyperparameters, or training settings.
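The final step of Algorithm A routes each decision through one of two precomputed positional strategies: the adversarially winning strategy f_ag inside the agent's winning region W, and the co-operatively winning strategy g_ag everywhere else. The following Python sketch illustrates only that combination step, under the assumption that the arena states, the region W, and the two strategies have already been obtained elsewhere; the stand-in names (make_best_effort_strategy, the toy states and moves) are hypothetical and are not from the paper, which gives no implementation. Producing the real inputs would require the DPA product construction and a parity-game solver, neither of which is shown here.

```python
# Minimal sketch of Step 7 of Algorithm A: combine a winning strategy f_ag
# (used inside the agent's winning region W) with a co-operatively winning
# strategy g_ag (used everywhere else) into one strategy k_ag : Q x 2^X -> 2^Y.
# All concrete states, regions, and strategies below are made-up placeholders.

from typing import Callable, FrozenSet, Hashable, Set

State = Hashable
Valuation = FrozenSet[str]          # set of atomic propositions read as "true"
Strategy = Callable[[State, Valuation], Valuation]

def make_best_effort_strategy(
    winning_region: Set[State],     # W: states where the agent can enforce E -> phi
    f_ag: Strategy,                 # uniform positional winning strategy on W
    g_ag: Strategy,                 # uniform positional co-operatively winning strategy
) -> Strategy:
    """Return k_ag(q, X): play f_ag inside W, otherwise fall back to g_ag."""
    def k_ag(q: State, env_move: Valuation) -> Valuation:
        if q in winning_region:
            return f_ag(q, env_move)
        return g_ag(q, env_move)
    return k_ag

if __name__ == "__main__":
    # Hypothetical toy instance: two winning states and two trivial strategies.
    W = {"q0", "q1"}
    f = lambda q, X: frozenset({"grant"})                                  # enforce the goal
    g = lambda q, X: frozenset({"retry"}) if "req" in X else frozenset()   # best effort only
    k = make_best_effort_strategy(W, f, g)

    print(k("q0", frozenset({"req"})))   # inside W  -> plays f: frozenset({'grant'})
    print(k("q7", frozenset({"req"})))   # outside W -> plays g: frozenset({'retry'})
    print(k("q7", frozenset())))         # outside W, no request -> g: frozenset()
```

In the full construction, f_ag and g_ag are uniform positional strategies extracted from parity-game solving on the same product arena, so the combined k_ag is itself positional on that arena, which is what makes the dispatch above well defined.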