SAT-Based PAC Learning of Description Logic Concepts

Authors: Balder ten Cate, Maurice Funk, Jean Christoph Jung, Carsten Lutz

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate SPELL on several datasets and compare it to the only other available learning system for EL that we are aware of, the EL tree learner (ELTL) incarnation of the DL-Learner system [Bühmann et al., 2016]. We find that the running time of SPELL is almost always significantly lower than that of ELTL.
Researcher Affiliation | Academia | Balder ten Cate (ILLC, University of Amsterdam); Maurice Funk (Leipzig University; Center for Scalable Data Analytics and Artificial Intelligence, ScaDS.AI); Jean Christoph Jung (TU Dortmund University); Carsten Lutz (Leipzig University; ScaDS.AI)
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | We implemented bounded fitting for the OMQ language (ELHr, ELQ) in the system SPELL (for SAT-based PAC EL concept Learner). ... Available at https://github.com/spell-system/SPELL.
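The bounded-fitting idea behind SPELL is to search for a fitting hypothesis of size 1, 2, 3, ... and return the first (hence smallest) one found, delegating each size-bounded consistency check to a SAT solver. The following is a minimal stand-in sketch in plain Python over a toy hypothesis class (conjunctions of boolean features), with brute-force enumeration replacing the SAT call; the function names and the hypothesis class are illustrative assumptions, not SPELL's actual API or encoding:

```python
from itertools import combinations

def consistent(conj, examples):
    # conj: a tuple of feature indices; an example is (feature_set, label).
    # The conjunction classifies an example as positive iff it contains
    # every feature in conj; it is consistent if this matches every label.
    return all((set(conj) <= feats) == label for feats, label in examples)

def bounded_fit(examples, n_features, max_size):
    # Try candidate sizes 0, 1, 2, ... up to max_size; within each size,
    # enumerate all conjunctions (a SAT solver plays this role in SPELL).
    # Returning the first hit yields a size-minimal fitting hypothesis,
    # which is what underpins the Occam-style PAC guarantee.
    for size in range(max_size + 1):
        for conj in combinations(range(n_features), size):
            if consistent(conj, examples):
                return conj
    return None  # no fitting conjunction within the size bound

# Toy run: only the conjunction of features 0 AND 1 fits these labels.
examples = [(frozenset({0, 1}), True),
            (frozenset({0}), False),
            (frozenset({1}), False)]
print(bounded_fit(examples, n_features=2, max_size=2))  # (0, 1)
```

The size-increasing outer loop is the essential point: because the returned hypothesis is as small as any fitting one, sample-size bounds for PAC learning follow from standard Occam arguments.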
Open Datasets | Yes | The first experiment uses the Yago 4 knowledge base, which combines the concept classes of schema.org with data from Wikidata [Tanon et al., 2020].
Dataset Splits | No | We again use the Yago benchmark, but now split the examples into training data and testing data (assuming a uniform probability distribution). The paper mentions only this train/test split; it provides no split percentages, sample counts, or detailed splitting methodology such as cross-validation.
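A split of the kind quoted above (drawing training and testing examples under a uniform distribution) can be sketched with the standard library; the function name, the 80/20 fraction, and the fixed seed are illustrative assumptions, since the paper specifies none of them:

```python
import random

def uniform_split(examples, train_fraction=0.8, seed=0):
    # Shuffle a copy of the examples under a fixed seed, so every
    # permutation is equally likely and the split is reproducible,
    # then cut the shuffled list into a training and a testing part.
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = uniform_split(list(range(100)))
print(len(train), len(test))  # 80 20
```

Recording the seed and fraction like this is exactly the information whose absence the review flags.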
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, specific processors, or memory amounts) used for running the experiments are provided in the paper; it reports 'running time' without specifying the computational environment.
Software Dependencies | Yes | SPELL is implemented in Python 3 and uses the PySAT library to interact with the Glucose SAT solver.
Experiment Setup | No | The paper discusses varying the number of labeled examples and the size of target ELQs, and notes that ELTL may prefer fittings of smaller size due to its heuristics. However, it does not provide specific hyperparameter values, training configurations, or system-level settings for SPELL that would allow full reproduction of the setup beyond the dataset and target-query characteristics.