A General Theoretical Framework for Learning Smallest Interpretable Models

Authors: Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Theoretical | "We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data."
Researcher Affiliation | Academia | (1) University of Leeds, Leeds, UK; (2) TU Wien, Vienna, Austria
Pseudocode | Yes | Algorithm 1: Generic Algorithm for finding a minimum model of size at most s for any strongly extendable model-type T. Algorithm 2: Generic Algorithm for finding a minimum model of size at most s for any extendable model-type T. Algorithm 3: Algorithm for finding a full set of strict extensions for DLs. (An illustrative sketch of this generic search idea follows the table.)
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper is a theoretical work and does not use or refer to specific datasets for empirical evaluation.
Dataset Splits | No | The paper is a theoretical work and does not describe any training, validation, or test dataset splits.
Hardware Specification | No | The paper describes a theoretical framework and algorithms and therefore does not specify any hardware used for experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.
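
The Pseudocode row refers to the paper's generic extension-based algorithms, which are not reproduced on this page. The following is a minimal, hypothetical Python sketch of the general idea only, not the paper's Algorithm 1: it assumes a model-type supplies a trivial initial model, a routine producing a full set of strict extensions, and a consistency check against the data, and it further assumes that every strict extension increases model size by exactly one. All names (ModelType, smallest_model, the toy monomial example) are illustrative placeholders.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, List, Optional


@dataclass
class ModelType:
    """Hypothetical interface for a model-type T (placeholder names)."""
    initial: Callable[[], Any]                     # trivial/empty partial model
    extensions: Callable[[Any], Iterable[Any]]     # full set of strict extensions
    consistent: Callable[[Any, List[Any]], bool]   # does the model fit all examples?


def smallest_model(T: ModelType, data: List[Any], s: int) -> Optional[Any]:
    """Level-by-level search for a consistent model of size at most s.

    Assumes each strict extension increases model size by exactly one, so
    search depth coincides with model size and the first consistent model
    found is a smallest one.  Returns None if no such model exists.
    """
    frontier = [T.initial()]
    for _ in range(s + 1):                  # sizes 0, 1, ..., s
        next_frontier = []
        for model in frontier:
            if T.consistent(model, data):   # smallest consistent model found
                return model
            next_frontier.extend(T.extensions(model))
        frontier = next_frontier
    return None


# Toy usage: models are sets of feature indices that must be true
# ("monomials"); data is a list of (Boolean feature tuple, label) pairs.
data = [((1, 1, 0), 1), ((1, 0, 0), 0), ((0, 1, 1), 0)]
n_features = 3

monomials = ModelType(
    initial=lambda: frozenset(),
    extensions=lambda m: (m | {i} for i in range(n_features) if i not in m),
    consistent=lambda m, d: all(all(x[i] for i in m) == bool(y) for x, y in d),
)

print(smallest_model(monomials, data, s=n_features))  # frozenset({0, 1})
```

Under the stated assumption, the level-by-level search returns a smallest consistent model whenever one of size at most s exists. The paper's contribution goes further, showing how such extension-based searches can be organised to run in fixed-parameter tractable time for the model-types it studies; this sketch does not attempt to capture those running-time guarantees.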