FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles

Authors: Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke (pp. 5313-5322)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We consider 42 experimental settings to find the best counterfactual explanations using FOCUS. ... We evaluate FOCUS on four binary classification datasets: Wine Quality (UCI 2009), HELOC (FICO 2017), COMPAS (Ofer 2017), and Shopping (UCI 2019). ... We compare against two baselines ... The results are listed in Table 1.
Researcher Affiliation | Academia | Ana Lucic (1), Harrie Oosterhuis (2), Hinda Haned (1), Maarten de Rijke (1); (1) University of Amsterdam, (2) Radboud University
Pseudocode | No | The paper provides mathematical formulas and conceptual diagrams but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper notes that code for the DACE baseline was "kindly provided to us by the authors," but it includes no statement or link for an open-source release of the code for the authors' own proposed method, FOCUS.
Open Datasets | Yes | We evaluate FOCUS on four binary classification datasets: Wine Quality (UCI 2009), HELOC (FICO 2017), COMPAS (Ofer 2017), and Shopping (UCI 2019). The Wine Quality dataset (4,898 instances, 11 features)... The HELOC set (10,459 instances, 23 features)... The COMPAS dataset (6,172 instances, 6 features)... The Shopping dataset (12,330 instances, 9 features)...
Dataset Splits | No | We train three types of tree-based models on 70% of each dataset: Decision Trees (DTs), Random Forests (RFs), and Adaptive Boosting (AB) with DTs as the base learners. We use the remaining 30% to find counterfactual examples for this test set.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running its experiments.
Software Dependencies | No | The paper mentions using 'Adam (Kingma and Ba 2015)' and the 'CPLEX Optimizer' (for the DACE baseline), but it does not specify version numbers for these or for any other software dependencies such as programming languages or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We jointly tune the hyperparameters of FOCUS (σ, τ, β, α) using Adam (Kingma and Ba 2015) for 1,000 iterations.
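The setup quoted above optimizes counterfactuals by gradient descent over a sigmoid-relaxed tree ensemble. A minimal sketch of that idea, under our own assumptions: a single one-split stump stands in for the ensemble, plain gradient descent replaces Adam, and all names (`soft_stump`, `find_counterfactual`, `sigma`, `beta`) are ours, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_stump(x, threshold, sigma):
    """Differentiable relaxation of the hard split 1[x > threshold]:
    as sigma grows, sigmoid(sigma * (x - threshold)) approaches the step."""
    return sigmoid(sigma * (x - threshold))

def find_counterfactual(x0, threshold=0.5, sigma=10.0, beta=0.1,
                        lr=0.05, n_iter=1000):
    """Minimize: -log p(x) + beta * (x - x0)^2, i.e. flip the stump's
    prediction toward class 1 while staying close to the original input."""
    x = float(x0)
    for _ in range(n_iter):
        p = soft_stump(x, threshold, sigma)
        grad_pred = -(1.0 - p) * sigma      # d/dx of -log(sigmoid(...))
        grad_dist = 2.0 * beta * (x - x0)   # d/dx of beta * (x - x0)^2
        x -= lr * (grad_pred + grad_dist)
    return x

# Original input sits below the threshold (predicted class 0);
# the optimized counterfactual crosses it by the smallest amount
# the distance penalty allows.
x_cf = find_counterfactual(x0=0.2)
```

The distance penalty `beta` plays the role of the trade-off hyperparameter in the quoted loss, and `sigma` controls how closely the soft split tracks the hard one; both would be tuned jointly in the paper's 42 experimental settings.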