Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Complexity of Equivalence and Learning for Multiplicity Tree Automata
Authors: Ines Marusic, James Worrell
JMLR 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We consider the query and computational complexity of learning multiplicity tree automata in Angluin's exact learning model. Motivated by this feature, we first characterise the complexity of the equivalence problem for multiplicity tree automata, showing that it is logspace equivalent to polynomial identity testing. We then move to query complexity, deriving lower bounds on the number of queries needed to learn multiplicity tree automata over both fixed and arbitrary fields. [...] We give a new learning algorithm for multiplicity tree automata in which counterexamples to equivalence queries are represented as DAGs. The query complexity of this algorithm is quadratic in the target automaton size and linear in the size of a largest counterexample. |
| Researcher Affiliation | Academia | Ines Marušić EMAIL, James Worrell EMAIL, Department of Computer Science, University of Oxford, Parks Road, Oxford OX1 3QD, UK |
| Pseudocode | Yes | Algorithm LMTA. Target: f ∈ Rec(Σ, F), where Σ has rank m and F is a field. 1. Make an equivalence query on the 0-dimensional F-MTA over Σ. If the answer is YES then output the 0-dimensional F-MTA over Σ and halt. Otherwise the answer is NO and z is a counterexample. Initialise: n ← 1, t_n ← z, X ← {t_n}, Y ← {□}. 2.1. For every k ∈ {0, …, m}, σ ∈ Σ_k, and (i_1, …, i_k) ∈ [n]^k: if H_{σ(t_{i_1},…,t_{i_k}),Y} is not a linear combination of H_{t_1,Y}, …, H_{t_n,Y} then n ← n + 1, t_n ← σ(t_{i_1}, …, t_{i_k}), X ← X ∪ {t_n}. 2.2. Define an F-MTA A = (n, Σ, μ, γ) as follows: γ = H_{X,□}; for every k ∈ {0, …, m} and σ ∈ Σ_k, define the matrix μ(σ) ∈ F^{n^k × n} by the equation μ(σ) · H_{X,Y} = H_{σ(X,…,X),Y} (12). 3.1. Make an equivalence query on A. If the answer is YES then output A and halt. Otherwise the answer is NO and z is a counterexample. Searching bottom-up, find a subtree σ(τ_1, …, τ_k) of z that satisfies the following two conditions: (i) for every j ∈ [k], H_{τ_j,Y} = μ(τ_j) · H_{X,Y}; (ii) for some c ∈ Y, H_{σ(τ_1,…,τ_k),c} ≠ μ(σ(τ_1, …, τ_k)) · H_{X,c}. 3.2. For every j ∈ [k] and (i_1, …, i_{j−1}) ∈ [n]^{j−1}: Y ← Y ∪ {c[σ(t_{i_1}, …, t_{i_{j−1}}, □, τ_{j+1}, …, τ_k)]}. 3.3. For every j ∈ [k]: if H_{τ_j,Y} is not a linear combination of H_{t_1,Y}, …, H_{t_n,Y} then n ← n + 1, t_n ← τ_j, X ← X ∪ {t_n}. 3.4. Go to 2. (Table 2: Exact learning algorithm LMTA for the class of multiplicity tree automata.) |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating the release of source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments requiring empirical datasets. Therefore, it does not provide access information for any open datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets, thus no dataset splits are discussed. |
| Hardware Specification | No | The paper is theoretical and focuses on computational complexity and algorithm design, not empirical evaluations, thus it does not specify any hardware. |
| Software Dependencies | No | The paper is theoretical and does not describe software implementations or dependencies with specific version numbers for experiments. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and complexity, not the setup of empirical experiments or hyperparameter tuning. |
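Both loops of the LMTA algorithm quoted above hinge on the same primitive: testing whether a new Hankel-matrix row is a linear combination of the rows already collected in X. The paper gives no implementation, so the following is only a minimal sketch of that test over the rationals, using exact arithmetic to avoid floating-point rank errors; the function name `in_span` and the list-of-lists row encoding are hypothetical choices, not from the paper.

```python
from fractions import Fraction

def in_span(rows, vec):
    """Return True iff vec is a linear combination of rows (over Q).

    Sketch of the linear-independence test in steps 2.1 and 3.3 of
    LMTA: a candidate Hankel row is added to the basis only when this
    returns False. Exact Fractions keep the test sound over Q.
    """
    # Build a row-echelon basis from the given rows.
    echelon = []
    for row in rows:
        row = [Fraction(x) for x in row]
        for e in echelon:
            p = next(i for i, x in enumerate(e) if x != 0)
            if row[p] != 0:
                f = row[p] / e[p]
                row = [a - f * b for a, b in zip(row, e)]
        if any(x != 0 for x in row):
            echelon.append(row)
    # Reduce the candidate vector against the echelon basis.
    v = [Fraction(x) for x in vec]
    for e in echelon:
        p = next(i for i, x in enumerate(e) if x != 0)
        if v[p] != 0:
            f = v[p] / e[p]
            v = [a - f * b for a, b in zip(v, e)]
    # Membership in the span means the residual is zero.
    return all(x == 0 for x in v)
```

For example, `in_span([[1, 0], [0, 1]], [2, 3])` holds, while `in_span([[1, 1]], [1, 2])` does not, so in the latter case LMTA would extend X with the new tree. Over a fixed finite field the same elimination works with field arithmetic in place of `Fraction`.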