Learning Interpretable Temporal Properties from Positive Examples Only
Authors: Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To assess the effectiveness of our algorithms, we evaluate them on a few practical case studies. |
| Researcher Affiliation | Academia | Rajarshi Roy (1), Jean-Raphaël Gaglione (2), Nasim Baharisangari (3), Daniel Neider (4,5), Zhe Xu (3), Ufuk Topcu (2). (1) Max Planck Institute for Software Systems, Kaiserslautern, Germany; (2) University of Texas at Austin, Texas, USA; (3) Arizona State University, Arizona, USA; (4) TU Dortmund University, Dortmund, Germany; (5) Center for Trustworthy Data Science and Security, University Alliance Ruhr, Germany |
| Pseudocode | Yes | Algorithm 1: Symbolic Algorithm for Learning DFA (an illustrative SAT-encoding sketch follows the table) |
| Open Source Code | Yes | https://github.com/cryhot/samp2symb/tree/paper/posdata |
| Open Datasets | No | The paper describes generating datasets for its experiments (e.g., 'generated a set of 1000 positive words', 'generated sample words', 'used 10000 words clustered'), but does not provide concrete access information (link, DOI, repository, or formal citation to a public dataset) for these specific datasets. |
| Dataset Splits | No | The paper mentions using 'positive words' or 'sample words' for learning but does not provide specific details on how the data was split into training, validation, and testing sets, nor does it use phrases like '80/10/10 split' or 'k-fold cross-validation'. |
| Hardware Specification | Yes | Overall, we ran all the experiments using 8 GiB of RAM and two CPU cores with a clock speed of 3.6 GHz. |
| Software Dependencies | No | The paper mentions 'Python 3', 'PySAT', and 'clingo' but does not provide specific version numbers for these software dependencies to ensure reproducibility (see the illustrative pins after the table). |
| Experiment Setup | Yes | For both algorithms, we set the maximum formula size n = 10 and a timeout of TO = 1000s. For S-SYMLTL, we additionally set the time horizon K = 8. [...] We ran algorithms CEGDFA and SYMDFA with a timeout TO = 1000s, and for n up to 10. [...] First, in every algorithm, we learned models in an incremental manner, i.e., we started by learning DFAs (resp. LTLf formulas) of size 1 and then increased the size by 1. (An illustrative driver loop for this incremental search follows the table.) |
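The *Pseudocode* row points to the paper's Algorithm 1, a symbolic (SAT-based) procedure for learning a DFA from positive words. The following is a minimal sketch of how such an encoding can look in Python with PySAT; the function name `learn_dfa` and the variable layout (`d` for transitions, `f` for accepting states, `x` for run states) are our own illustrative choices, not the authors' encoding, and the sketch omits the paper's key ingredient, the language-minimality constraint that rules out trivial accept-everything automata.

```python
# Minimal sketch of a SAT encoding for "find an n-state DFA accepting
# every given positive word". Illustrative only: the variable layout and
# function name are ours, and the paper's language-minimality machinery
# (which prevents the trivial accept-everything DFA) is omitted.
# Requires the python-sat package.
from itertools import combinations, product
from pysat.solvers import Glucose3

def learn_dfa(words, alphabet, n):
    """Return (transitions, accepting states) of some n-state DFA with
    initial state 0 that accepts every word in `words`, or None."""
    pool = {}
    def var(key):                       # lazily number propositional variables
        return pool.setdefault(key, len(pool) + 1)

    solver = Glucose3()
    states = range(n)

    # Determinism: for every (state, letter) exactly one successor state.
    for p, a in product(states, alphabet):
        lits = [var(('d', p, a, q)) for q in states]
        solver.add_clause(lits)                  # at least one successor
        for l1, l2 in combinations(lits, 2):
            solver.add_clause([-l1, -l2])        # at most one successor

    # Run constraints: x[w,i,q] says word w is in state q after i letters.
    for w_idx, w in enumerate(words):
        solver.add_clause([var(('x', w_idx, 0, 0))])   # runs start in state 0
        for i, a in enumerate(w):
            for p, q in product(states, states):
                # x[w,i,p] and d[p,a,q] together imply x[w,i+1,q]
                solver.add_clause([-var(('x', w_idx, i, p)),
                                   -var(('d', p, a, q)),
                                   var(('x', w_idx, i + 1, q))])
        for q in states:                # the state reached last must accept
            solver.add_clause([-var(('x', w_idx, len(w), q)),
                               var(('f', q))])

    if not solver.solve():
        return None
    model = set(solver.get_model())
    delta = {(p, a): q
             for p, a, q in product(states, alphabet, states)
             if var(('d', p, a, q)) in model}
    accepting = {q for q in states if var(('f', q)) in model}
    return delta, accepting
```

For instance, `learn_dfa(['ab', 'aab'], 'ab', 2)` searches for a two-state DFA over the alphabet {a, b} that accepts both words.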
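The *Experiment Setup* row describes an incremental search: models of size 1 are tried first, and the size grows by one until a model is found, under a global timeout of TO = 1000 s and a size cap of n = 10. A plausible driver loop is sketched below; this is our reconstruction, not the authors' code, and it only checks the budget between sizes, whereas a faithful implementation would also interrupt the solver mid-call.

```python
import time

def incremental_search(words, alphabet, n_max=10, timeout=1000.0):
    """Try sizes 1..n_max in order; return (size, model) for the smallest
    size admitting a consistent DFA, or None on timeout/failure."""
    start = time.monotonic()
    for n in range(1, n_max + 1):
        if time.monotonic() - start > timeout:
            return None                          # overall budget TO exhausted
        result = learn_dfa(words, alphabet, n)   # SAT call from the sketch above
        if result is not None:
            return n, result                     # smallest consistent size
    return None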
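Finally, on the *Software Dependencies* finding: the paper names Python 3, PySAT, and clingo but no versions. One standard remedy is a pinned requirements file; the pins below are placeholders chosen for illustration and are not claimed to match the authors' environment.

```text
# requirements.txt  (illustrative pins only; the paper states no versions)
python-sat==0.1.8.dev2   # PyPI distribution of PySAT
clingo==5.6.2            # Python bindings for the clingo ASP system
```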