Learning Regular Languages via Alternating Automata
Authors: Dana Angluin, Sarah Eisenstat, Dana Fisman
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested all four algorithms L*, NL*, UL* and AL* on randomly generated DFAs, NFAs, UFAs and AFAs. The results show that for X ∈ {D, N, U, A}, roughly speaking, XL* out-performs the others on randomly generated X-FAs. There is a clear advantage of AL* over the others in terms of the number of states of the learned automaton, and an advantage of L* in terms of the number of equivalence queries. |
| Researcher Affiliation | Academia | Dana Angluin Yale University Sarah Eisenstat Massachusetts Institute of Technology Dana Fisman University of Pennsylvania |
| Pseudocode | Yes | Algorithm 1: XL* for X ∈ {D, N, U, A} |
| Open Source Code | No | The paper does not contain any statements about releasing code for the described methodology or provide links to a code repository. |
| Open Datasets | No | The paper states that the experiments were conducted on 'randomly generated DFAs, NFAs, UFAs and AFAs'. It does not provide access information (link, citation, or repository) for these generated datasets, nor does it refer to pre-existing public datasets. |
| Dataset Splits | No | The paper describes how equivalence queries were implemented ('randomly querying 10,000 membership queries') which acts as evaluation. However, it does not specify traditional dataset splits (e.g., 80/10/10 split) for training, validation, or testing in the typical supervised learning context, as the algorithms learn interactively from queries. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions implementing the algorithmic scheme XL and its sub-procedures but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or specific solvers). |
| Experiment Setup | Yes | We have tested the four resulting algorithms L*, NL*, UL* and AL* on four sets of randomly generated automata; in each case the alphabet size was 3 and the number of states was as indicated. The first set consists of randomly generated DFAs of size 1 to 100. The second set consists of randomly generated NFAs of size 10. The third set consists of randomly generated UFAs of size 10. The fourth set consists of randomly generated AFAs of size 7. Equivalence queries were implemented by randomly querying 10,000 membership queries. Lengths were uniformly chosen from 0 to t + 2 where t is the number of states in the target XFA. For AFA targets the initial condition and transition relation chose randomly a formula with one alternation of disjunctions and conjunctions. |
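The setup above replaces exact equivalence queries with random sampling: 10,000 membership queries on words whose lengths are uniform in [0, t+2], where t is the number of states of the target. The sketch below illustrates that approximation, assuming a minimal DFA representation (initial state, transition map, accepting set); the function and variable names are our own, not from the paper.

```python
import random

def random_word(alphabet, max_len):
    """Sample a word whose length is uniform in [0, max_len]."""
    n = random.randint(0, max_len)
    return tuple(random.choice(alphabet) for _ in range(n))

def dfa_accepts(dfa, word):
    """dfa = (init, delta, accepting); delta maps (state, symbol) -> state."""
    init, delta, accepting = dfa
    state = init
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

def approx_equivalence(hypothesis, target, alphabet, t, samples=10_000):
    """Approximate an equivalence query by random membership queries,
    as in the paper's setup: 'samples' words with lengths uniform in
    [0, t+2]. Returns a counterexample word, or None if none is found
    (i.e., the hypothesis is accepted as equivalent)."""
    for _ in range(samples):
        w = random_word(alphabet, t + 2)
        if dfa_accepts(hypothesis, w) != dfa_accepts(target, w):
            return w
    return None
```

For example, with a two-state target over {a, b, c} accepting words with an even number of a's and a one-state hypothesis accepting everything, the sampler finds a counterexample (a word with an odd number of a's) with overwhelming probability, while comparing the target against itself returns None. Note this scheme can report equivalence for inequivalent automata when all sampled words agree, which is the usual caveat of sampled equivalence queries.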