Learning Small Decision Trees with Few Outliers: A Parameterized Perspective

Authors: Harmender Gahlawat, Meirav Zehavi

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We first establish that both DTSO and DTDO are W[1]-hard when parameterized by s+δmax and d+δmax, respectively... We complement this result by showing that these problems become FPT if we include the parameter t. We also consider the kernelization complexity of these problems and establish several positive and negative results for both DTSO and DTDO." (These parameterized claims are laid out in the sketch after this table.)
Researcher Affiliation | Academia | Harmender Gahlawat* and Meirav Zehavi*, Ben-Gurion University of the Negev, Beersheba, Israel.
Pseudocode | No | The paper describes algorithmic ideas in prose, such as 'we extend this result by establishing that we can enumerate all minimal support sets...', but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide access to source code for the described methodology, nor does it state that code is available.
Open Datasets | No | The paper studies theoretical properties of decision tree learning on a 'classification instance (CI) E', an abstract construct, and does not refer to any publicly available or open datasets for training or evaluation.
Dataset Splits | No | The paper focuses on theoretical analysis and does not describe any experimental setup involving dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical in nature and does not describe any computational experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper does not describe any software dependencies or version numbers, as it focuses on theoretical analysis rather than implementation or experimental details.
Experiment Setup | No | The paper focuses on theoretical complexity results and does not describe any experimental setup details, such as hyperparameters or training configurations.
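
For readability, the parameterized-complexity claims quoted in the Research Type row are laid out below. This is a reading aid derived only from the quoted abstract; interpreting s as tree size, d as tree depth, t as the number of permitted outliers, and δmax as the paper's maximum-difference parameter is an assumption, since those symbols are not defined in this table.

% Sketch of the quoted claims; the symbol readings noted above are assumptions.
\begin{align*}
&\textsc{DTSO}\ \text{parameterized by}\ s + \delta_{\max}: \text{W[1]-hard}\\
&\textsc{DTDO}\ \text{parameterized by}\ d + \delta_{\max}: \text{W[1]-hard}\\
&\textsc{DTSO}\ \text{parameterized by}\ s + \delta_{\max} + t: \text{FPT}\\
&\textsc{DTDO}\ \text{parameterized by}\ d + \delta_{\max} + t: \text{FPT}
\end{align*}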