On Explaining Random Forests with SAT

Authors: Yacine Izza, Joao Marques-Silva

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results, obtained on a wide range of publicly available datasets, demonstrate that the proposed SAT-based approach scales to RFs of sizes common in practical applications.
Researcher Affiliation | Academia | (1) University of Toulouse, France; (2) IRIT, CNRS, Toulouse, France
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the authors release the source code for the methodology described in the paper.
Open Datasets | Yes | The assessment is performed on a selection of 32 publicly available datasets, which originate from the UCI Machine Learning Repository [Dua and Graff, 2017] and the Penn Machine Learning Benchmarks [Olson et al., 2017].
Dataset Splits | Yes | When training RF classifiers for the selected datasets, we used 80% of the dataset instances (20% used for test data).
Hardware Specification | Yes | The experiments are conducted on a MacBook Pro with a Dual-Core Intel Core i5 2.3 GHz CPU and 8 GB of RAM, running macOS Catalina.
Software Dependencies | No | The paper mentions the 'scikit-learn ML tool' and 'PySAT [Ignatiev et al., 2018]' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | The number of trees in each RF is set to 100 while tree depth varies between 3 and 8.
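
The split and training configuration summarized in the rows above (80%/20% split; RFs of 100 trees with depth between 3 and 8, trained with scikit-learn) can be approximated with the short Python sketch below. The dataset name, the random seed, and the use of pmlb.fetch_data are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch of the reported setup: 80/20 split, RFs with 100 trees
    # and depth 3..8. Dataset choice, seed, and the pmlb loader are
    # assumptions for illustration, not details given in the paper.
    from pmlb import fetch_data
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset from the Penn Machine Learning Benchmarks.
    X, y = fetch_data("ann-thyroid", return_X_y=True)

    # 80% of the instances for training, 20% held out for testing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # One RF per depth in the reported range: 100 trees, depth 3 to 8.
    for depth in range(3, 9):
        rf = RandomForestClassifier(n_estimators=100, max_depth=depth,
                                    random_state=0)
        rf.fit(X_train, y_train)
        print(f"depth={depth}  test accuracy={rf.score(X_test, y_test):.3f}")

Because the paper does not pin library versions, accuracy figures obtained from such a sketch may differ slightly across scikit-learn releases.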