On the Computation of Example-Based Abductive Explanations for Random Forests

Authors: Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, Nicolas Szczepanski

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments in the case of random forest classifiers show that our CEGAR-based algorithm is quite efficient in practice.
Researcher Affiliation | Academia | Univ. Artois, CNRS, CRIL, F-62300 Lens, France; Institut Universitaire de France. {audemard, lagniez, marquis, szczepanski}@cril.fr
Pseudocode | No | The paper describes the algorithmic steps and components (e.g., 'Our approach relies on a two-phase procedure', 'Our algorithm is based on linear search') but does not include a formally labeled 'Algorithm' or 'Pseudocode' block. (A generic sketch of such a linear search is given after this table.)
Open Source Code | Yes | Additional empirical results and the code used in our experiments are also furnished in this supplementary material.
Open Datasets | Yes | We have focused on 14 datasets issued from three well-known repositories, namely OpenML (openml.org), UCI (archive.ics.uci.edu/ml/), and UCR (timeseriesclassification.com).
Dataset Splits | Yes | A 10-fold cross-validation process has been achieved.
Hardware Specification | Yes | All the experiments have been conducted on a computer equipped with an Intel(R) Xeon E5-2637 CPU @ 3.5 GHz and 128 GiB of memory.
Software Dependencies | No | The paper mentions 'scikit-learn' and 'glucose' but does not specify their version numbers.
Experiment Setup | Yes | All the hyperparameters have been set to their default values (100 trees per forest). (An illustrative training setup is sketched after the table.)
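The Pseudocode row quotes the paper's description of a linear-search procedure but notes that no formal algorithm block is given. For reference only, here is a minimal, generic sketch of a deletion-based linear search for a subset-minimal abductive explanation. This is not the paper's two-phase CEGAR-based algorithm; the `entails_prediction` oracle, the function name, and the types are hypothetical placeholders.

    from typing import Callable, Dict, Set

    def linear_search_axp(instance: Dict[str, float],
                          entails_prediction: Callable[[Set[str]], bool]) -> Set[str]:
        # Deletion-based linear search for a subset-minimal abductive
        # explanation. `entails_prediction(S)` is a hypothetical oracle that
        # returns True iff fixing the features in S to their values in
        # `instance` forces the random forest to keep its prediction
        # (e.g., via a call to a SAT solver such as glucose).
        kept = set(instance)             # start from the whole instance
        for feature in list(instance):   # one oracle call per feature
            candidate = kept - {feature}
            if entails_prediction(candidate):
                kept = candidate         # the feature is not needed
        return kept                      # subset-minimal w.r.t. the oracle

Each feature is tested exactly once, so the number of oracle calls is linear in the number of features; the result is subset-minimal with respect to whatever entailment check the oracle implements.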
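The Dataset Splits and Experiment Setup rows describe the training protocol only at a high level: scikit-learn defaults (100 trees per forest) and 10-fold cross-validation. The snippet below sketches what that protocol looks like under those assumptions; the dataset choice (fetch_openml("diabetes")) is an arbitrary illustration and is not one of the paper's 14 benchmarks.

    from sklearn.datasets import fetch_openml
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Arbitrary OpenML dataset, chosen for illustration only; the paper
    # draws its 14 datasets from OpenML, UCI and UCR.
    X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)

    # Default scikit-learn hyperparameters, i.e. 100 trees per forest.
    clf = RandomForestClassifier()

    # 10-fold cross-validation, as reported in the paper.
    scores = cross_val_score(clf, X, y, cv=10)
    print("mean accuracy:", scores.mean())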