Representing and Learning Grammars in Answer Set Programming

Authors: Mark Law, Alessandra Russo, Elisa Bertino, Krysia Broda, Jorge Lobo

AAAI 2019, pp. 2919-2928

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section summarises experimental results of using our approach to induce ASGs. The approach was evaluated on several context-sensitive languages, including some languages drawn from a related paper targeting learning mildly context-sensitive (MCS) languages represented as linear indexed grammars (LIGs) (Nakamura and Imada 2011). Table 3: A summary of the results of our evaluation. Final Time and Total Time show the learning time (on an Ubuntu 14.04 desktop machine with a 3.4 GHz Intel(R) Core(TM) i7-3770 processor and 16GB RAM) taken in the final iteration and the total learning time, respectively. |E+| and |E-| show the number of positive and negative examples needed to learn the target language in each case. |SM| is the number of rules in the hypothesis space.
Researcher Affiliation | Academia | Mark Law (Imperial College London, UK; mark.law09@imperial.ac.uk), Alessandra Russo (Imperial College London, UK; a.russo@imperial.ac.uk), Elisa Bertino (Purdue University, USA; bertino@purdue.edu), Krysia Broda (Imperial College London, UK; k.broda@imperial.ac.uk), Jorge Lobo (ICREA - Universitat Pompeu Fabra, Spain; jorge.lobo@upf.edu)
Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions and cites the ILASP system (Law, Russo, and Broda 2015a) with a link to its project page: "Law, M.; Russo, A.; and Broda, K. 2015a. The ILASP system for learning answer set programs. https://www.doc.ic.ac.uk/~ml1909/ILASP." However, this is a pre-existing tool (developed by a subset of the authors) that the paper builds on, not a release of the source code for the ASG methodology or prototype implementation described in this paper.
Open Datasets | No | The paper describes abstract language examples (e.g., "The copy language: ww", "The language a^n b^n c^n") and defines sets of positive and negative strings (E+ and E-) for the learning task. However, it does not provide concrete access information (links, citations with authors/year, or repository names) for these datasets, as they appear to be synthetically generated or defined by the grammar structure rather than external, publicly hosted datasets (a generation sketch is given after this table).
Dataset Splits | No | The paper describes an iterative learning approach where counterexamples are identified and added to the learning task (a toy reconstruction of this loop follows the table). It does not specify fixed training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility in the traditional machine-learning sense; the concept of a distinct validation set is not mentioned.
Hardware Specification | Yes | Table 3 states: "(on an Ubuntu 14.04 desktop machine with a 3.4 GHz Intel(R) Core(TM) i7-3770 processor and 16GB RAM)".
Software Dependencies | No | The paper mentions using the "ILASP (Inductive Learning of Answer Set Programs) system" but does not specify its version number or any other software dependencies (e.g., programming language versions, specific libraries, or solvers with their versions) that would be needed for reproducibility.
Experiment Setup | No | The paper describes an "iterative approach" to learning, where counterexamples are added, and bounds are placed on the depth of parse trees (a sketch of depth-bounded derivation follows the table). However, it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes), specific optimizer settings, or other concrete configuration parameters typically found in experimental setups for models.
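
Since the example sets appear to be synthetic, the following is a minimal sketch of how strings for the two languages named in the Open Datasets row could be enumerated into positive (E+) and negative (E-) examples. The membership tests, function names, and length bound here are illustrative assumptions, not the authors' actual generation procedure.

```python
import itertools

def is_copy(s: str) -> bool:
    """True iff s is in the copy language ww (some string repeated twice)."""
    half = len(s) // 2
    return len(s) % 2 == 0 and s[:half] == s[half:]

def is_anbncn(s: str) -> bool:
    """True iff s is in the language a^n b^n c^n (n >= 0)."""
    n = len(s) // 3
    return s == "a" * n + "b" * n + "c" * n

def make_examples(member, alphabet, max_len):
    """Partition all strings over `alphabet` up to `max_len` into (E+, E-)."""
    e_plus, e_minus = [], []
    for length in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            s = "".join(chars)
            (e_plus if member(s) else e_minus).append(s)
    return e_plus, e_minus

e_plus, e_minus = make_examples(is_anbncn, "abc", 6)
print(e_plus)        # ['', 'abc', 'aabbcc']
print(len(e_minus))  # every other string up to length 6 is a negative example
```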
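The iterative, counterexample-driven procedure mentioned in the Dataset Splits row can be reconstructed in miniature as follows. This is a sketch under strong assumptions: the `learn` stub simply memorises positive examples and stands in for the actual ILASP-based learner, and the oracle and length bound are hypothetical.

```python
import itertools

def oracle(s: str) -> bool:
    """Ground truth for the target language a^n b^n c^n."""
    n = len(s) // 3
    return s == "a" * n + "b" * n + "c" * n

def learn(e_plus, e_minus):
    """Placeholder learner: memorise E+ (stands in for a call to ILASP)."""
    accepted = set(e_plus)
    return lambda s: s in accepted

def find_counterexample(hypothesis, alphabet="abc", max_len=6):
    """First string (in length order) on which hypothesis and oracle disagree."""
    for length in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            s = "".join(chars)
            if hypothesis(s) != oracle(s):
                return s
    return None

e_plus, e_minus = [], []
hypothesis = learn(e_plus, e_minus)
while (cex := find_counterexample(hypothesis)) is not None:
    # A misclassified string becomes a new positive or negative example.
    (e_plus if oracle(cex) else e_minus).append(cex)
    hypothesis = learn(e_plus, e_minus)

print(e_plus, e_minus)  # ['', 'abc', 'aabbcc'] []
```

In the paper the learner is ILASP and the hypotheses are ASGs; the memorising stub above only terminates because the counterexample search is bounded at length 6.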
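The Experiment Setup row notes that the approach bounds the depth of parse trees. The sketch below shows what such a bound means operationally for a plain context-free backbone; the toy grammar (for a^n b^n) and the function are illustrative assumptions, since the paper's ASGs additionally attach ASP annotations to each production.

```python
# Toy CFG backbone: S -> "a" S "b" | "" (generates a^n b^n).
GRAMMAR = {"S": [["a", "S", "b"], []]}

def strings_within_depth(symbol, depth):
    """All terminal strings derivable from `symbol` by a parse tree of height <= depth."""
    if depth == 0:
        return set()
    results = set()
    for production in GRAMMAR.get(symbol, []):
        partials = {""}  # left-to-right expansion of the production
        for sym in production:
            if sym in GRAMMAR:  # nonterminal: recurse with one less depth
                expansions = strings_within_depth(sym, depth - 1)
            else:               # terminal: yields itself
                expansions = {sym}
            partials = {p + e for p in partials for e in expansions}
        results |= partials
    return results

print(sorted(strings_within_depth("S", 3), key=len))  # ['', 'ab', 'aabb']
```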