Learning Residual Alternating Automata

Authors: Sebastian Berndt, Maciej Liśkiewicz, Matthias Lutter, Rüdiger Reischuk

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we disprove this conjecture by constructing a counterexample. As our main positive result we design an efficient learning algorithm, named AL*, and give a proof that it outputs residual AFAs only. In addition, we investigate the succinctness of these different FA types in more detail.
Researcher Affiliation | Academia | Sebastian Berndt, Maciej Liśkiewicz, Matthias Lutter, Rüdiger Reischuk. Institute for Theoretical Computer Science, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany. {berndt,liskiewi,lutter,reischuk}@tcs.uni-luebeck.de
Pseudocode | Yes | Algorithm 1: AL* for the target language L. (An illustrative sketch of this style of query learner follows the table.)
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | No | The paper discusses learning algorithms for abstract 'regular languages' but does not specify or provide access information for any concrete, publicly available datasets used for training or evaluation.
Dataset Splits | No | The paper generally discusses concepts of training, validation, and test sets in the context of learning algorithms, but does not provide specific dataset split information (percentages, sample counts, or explicit splitting methodology) for any experiments conducted by the authors.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running any experiments.
Software Dependencies | No | The paper mentions 'specially designed software tools' but does not provide specific ancillary software details, such as library or solver names with version numbers.
Experiment Setup | No | The paper does not contain specific experimental setup details, concrete hyperparameter values, training configurations, or system-level settings, as it is primarily a theoretical paper.
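
For context on the Pseudocode row: the paper's Algorithm 1 (AL*) is an Angluin-style learner that interacts with a teacher through membership and equivalence queries. The sketch below illustrates that general query-learning framework only; it is a plain L*-style DFA learner, not the paper's AL* for residual alternating automata. The teacher construction, the bounded equivalence check, and the names `make_teacher` and `lstar` are assumptions made for this illustration.

```python
"""Minimal sketch of an Angluin-style query-learning loop.

Not the paper's AL* (which outputs residual AFAs); this only
illustrates the membership/equivalence-query framework that
Algorithm 1 builds on. All names and the toy teacher below are
assumptions made for this sketch.
"""
from itertools import product


def make_teacher(accepts, alphabet, max_len=8):
    """Teacher for a target language given as a predicate on strings.

    Equivalence queries are approximated by testing every string up
    to max_len; that is adequate only for this toy demonstration.
    """
    def member(w):
        return accepts(w)

    def equivalent(hypothesis):
        for n in range(max_len + 1):
            for w in map("".join, product(alphabet, repeat=n)):
                if accepts(w) != hypothesis(w):
                    return w  # counterexample
        return None  # no disagreement found

    return member, equivalent


def lstar(alphabet, member, equivalent):
    """Learn an acceptance predicate for the target regular language."""
    prefixes, suffixes = [""], [""]

    def row(p):  # behaviour signature of prefix p under the experiments
        return tuple(member(p + s) for s in suffixes)

    while True:
        # Close the table: every one-letter extension of a known prefix
        # must behave like some known prefix.
        closed = False
        while not closed:
            closed = True
            rows = {row(p) for p in prefixes}
            for p, a in product(list(prefixes), alphabet):
                if row(p + a) not in rows:
                    prefixes.append(p + a)
                    closed = False
                    break

        # Build the hypothesis: one representative prefix per row.
        reps = {}
        for p in prefixes:
            reps.setdefault(row(p), p)

        def hypothesis(w, reps=reps):
            state = ""
            for a in w:
                state = reps[row(state + a)]
            return member(state)  # empty suffix is among the experiments

        cex = equivalent(hypothesis)
        if cex is None:
            return hypothesis
        # Add all suffixes of the counterexample as new experiments
        # (the Maler-Pnueli refinement; simple and correct, not optimal).
        for i in range(len(cex) + 1):
            if cex[i:] not in suffixes:
                suffixes.append(cex[i:])
```

A toy run, learning the language of strings over {a, b} with an even number of a's:

```python
member, equivalent = make_teacher(lambda w: w.count("a") % 2 == 0, "ab")
dfa = lstar("ab", member, equivalent)
assert dfa("abab") and not dfa("ab")
```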