On Preferred Abductive Explanations for Decision Trees and Random Forests
Authors: Gilles Audemard, Steve Bellart, Louenas Bounia, Frederic Koriche, Jean-Marie Lagniez, Pierre Marquis
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we present the results of an empirical evaluation illustrating the benefits that can be achieved in practice by leveraging a model of the user preferences in the computation of majoritary reasons." And, under "Experimental setup": "We have considered 22 datasets for binary classification, which are standard benchmarks from the repositories Kaggle (www.kaggle.com), Open ML (www.openml.org), and UCI (archive.ics.uci.edu/ml/), and we have learned random forests from them." |
| Researcher Affiliation | Academia | Gilles Audemard¹, Steve Bellart¹, Louenas Bounia¹, Frederic Koriche¹, Jean-Marie Lagniez¹ and Pierre Marquis¹,² (¹Univ. Artois, CNRS, Centre de Recherche en Informatique de Lens (CRIL), F-62300 Lens, France; ²Institut Universitaire de France). {audemard,bellart,bounia,koriche,lagniez,marquis}@cril.fr |
| Pseudocode | No | The paper describes algorithms verbally and through propositions but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | "A full-proof version of the paper is available at www.cril.univ-artois.fr/expekctation/papers.html." This link points to a PDF, not source code. The paper also mentions the wordfreq library (pypi.org/project/wordfreq/), the Scikit-Learn library [Pedregosa et al., 2011], and SHAP (shap.readthedocs.io/en/latest/api.html), but these are third-party tools, not the authors' own code. There is no statement about releasing their own implementation. |
| Open Datasets | Yes | "We have considered 22 datasets for binary classification, which are standard benchmarks from the repositories Kaggle (www.kaggle.com), Open ML (www.openml.org), and UCI (archive.ics.uci.edu/ml/), and we have learned random forests from them. Some of these datasets are listed in Table 1." |
| Dataset Splits | Yes | "For each dataset b, a 10-fold cross validation process has been achieved." A minimal sketch of this cross-validation protocol is given after the table. |
| Hardware Specification | Yes | "All the experiments have been conducted on a computer equipped with Intel(R) Core(TM) i9-9900 CPU @ 3.10GHz 16 cores and 64 GB of memory." |
| Software Dependencies | No | The paper cites "version 0.23.2 of the Scikit-Learn library [Pedregosa et al., 2011]" and "openwbo [Martins et al., 2014]". While Scikit-Learn's version is stated, the version of openwbo, a key MaxSAT solver, is not given. A sketch of a corresponding environment check appears after the table. |
| Experiment Setup | Yes | "All hyper-parameters of the learning algorithm have been set to their default value, except the number of trees. This parameter has been tuned to ensure that the accuracy of the forest is good enough." Also: "In the computation of random forests, categorical features have been treated as arbitrary numbers. Numeric features have been binarized on-the-fly by the random forest learning algorithm we used..." A sketch of this tuning loop follows the table. |
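
For concreteness, here is a minimal sketch of the evaluation protocol the table describes: fetch one public binary-classification benchmark and run 10-fold cross-validation on a random forest with Scikit-Learn. The choice of the OpenML `diabetes` dataset and the use of `fetch_openml` are assumptions for illustration; the paper does not say how the 22 benchmarks were retrieved.

```python
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical benchmark choice: any of the 22 binary-classification
# datasets from Kaggle / OpenML / UCI would play the same role here.
X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)

# Random forest with every hyper-parameter at its default value,
# evaluated with the paper's 10-fold cross-validation protocol.
clf = RandomForestClassifier()
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```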
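
The paper tunes only the number of trees, "to ensure that the accuracy of the forest is good enough", without stating the exact criterion. The candidate grid and the accuracy target below are therefore assumptions; the sketch reuses `X` and `y` from the previous snippet.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

TARGET_ACCURACY = 0.85  # assumed threshold; the paper only says "good enough"

# Assumed tuning loop: grow the forest until 10-fold cross-validated
# accuracy clears the target, leaving every other hyper-parameter at
# its default, as the paper does.
for n_trees in (25, 50, 75, 100):
    clf = RandomForestClassifier(n_estimators=n_trees)
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{n_trees:>3} trees -> accuracy {acc:.3f}")
    if acc >= TARGET_ACCURACY:
        break
```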
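
Finally, since only Scikit-Learn's version is pinned in the paper, a reproduction can at least assert that version and check that an openwbo binary is reachable. The binary name `open-wbo` is an assumption based on the solver's usual build output.

```python
import shutil

import sklearn

# Version pin taken from the paper; anything else is untested.
assert sklearn.__version__ == "0.23.2", f"got {sklearn.__version__}"

# The paper gives no openwbo version, so presence on PATH is the most
# that can be verified here ("open-wbo" is assumed as the binary name).
assert shutil.which("open-wbo"), "open-wbo solver not found on PATH"
```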