Robust Counterfactual Explanations for Tree-Based Ensembles
Authors: Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here, we present our experimental results on benchmark datasets, namely, German Credit (Dua & Graff, 2017) and HELOC (FICO, 2018). |
| Researcher Affiliation | Industry | JP Morgan Chase AI Research. |
| Pseudocode | Yes | Algorithm 1 RobX: Generating Robust Counterfactual Explanations for Tree-Based Ensembles |
| Open Source Code | No | The paper provides no explicit statement or link indicating that the code for its methodology is open-sourced. |
| Open Datasets | Yes | German Credit (Dua & Graff, 2017) and HELOC (FICO, 2018) |
| Dataset Splits | No | For each of these datasets, we set aside 30% of the dataset for testing, and use the remaining 70% for training (in different configurations as discussed here). On the training data, we again perform a 50/50 split. The paper describes training and test splits but gives no explicit validation split percentage or details (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running experiments. |
| Software Dependencies | No | In this work, we use an existing implementation of computing LOF from scikit-learn and train an XGBoost model after tuning the hyperparameters using the hyperopt package. However, specific version numbers for these or other software dependencies are not provided (a pipeline sketch follows the table). |
| Experiment Setup | Yes | Because the feature values are normalized, a fixed choice of K = 1000 and σ = 0.1 is used for all our experiments (illustrated in the stability sketch after the table). |
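
The split procedure quoted in the Dataset Splits row (30% test, 70% train, then a 50/50 split of the training portion) can be reproduced mechanically. Below is a minimal sketch using scikit-learn's `train_test_split`; the placeholder data, shapes, and random seed are illustrative and not from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for German Credit / HELOC (shapes illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)

# 30% held out for testing, the remaining 70% used for training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0
)

# Further 50/50 split of the training data (used for the paper's different
# training configurations; no validation split is described)
X_train_a, X_train_b, y_train_a, y_train_b = train_test_split(
    X_train, y_train, test_size=0.50, random_state=0
)
```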
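The Software Dependencies row names three packages without versions: scikit-learn (for LOF), xgboost, and hyperopt. A minimal sketch of that pipeline follows; the search space, cross-validation setup, and all hyperparameter ranges are assumptions for illustration, not the paper's actual tuning configuration.

```python
import numpy as np
from hyperopt import fmin, hp, tpe
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import LocalOutlierFactor
from xgboost import XGBClassifier

# Placeholder training data (the paper uses German Credit and HELOC)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(700, 10))
y_train = rng.integers(0, 2, size=700)

# Hypothetical search space; the paper does not report its ranges
space = {
    "max_depth": hp.quniform("max_depth", 3, 10, 1),
    "learning_rate": hp.loguniform("learning_rate", -5, 0),
    "n_estimators": hp.quniform("n_estimators", 50, 300, 50),
}

def objective(params):
    # Minimize negative cross-validated accuracy of the XGBoost model
    model = XGBClassifier(
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        n_estimators=int(params["n_estimators"]),
        eval_metric="logloss",
    )
    return -cross_val_score(model, X_train, y_train, cv=3).mean()

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=10)

# LOF fit on training data; score_samples gives higher values for inliers,
# which the paper uses to assess how plausible a counterfactual is
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)
plausibility = lof.score_samples(X_train[:5])
```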
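The fixed K = 1000 and σ = 0.1 in the Experiment Setup row parameterize the counterfactual-stability estimate at the core of Algorithm 1 (RobX): K points are sampled from a Gaussian with standard deviation σ around a candidate counterfactual and the ensemble's scores are aggregated. The sketch below assumes a mean-minus-standard-deviation aggregation, which is one reading of the stability metric, and `model_predict` is a hypothetical stand-in for the tree ensemble's probability output.

```python
import numpy as np

def counterfactual_stability(model_predict, x, K=1000, sigma=0.1, seed=0):
    """Gaussian-perturbation stability estimate around a candidate counterfactual x.

    Assumed form: mean minus standard deviation of the model's scores over
    K samples drawn from N(x, sigma^2 I), matching the fixed K = 1000 and
    sigma = 0.1 reported for normalized features.
    """
    rng = np.random.default_rng(seed)
    samples = x + sigma * rng.standard_normal((K, x.shape[0]))
    scores = model_predict(samples)  # probability of the desired class
    return scores.mean() - scores.std()

# Toy usage with a hypothetical scoring function on normalized features
stability = counterfactual_stability(
    model_predict=lambda X: 1.0 / (1.0 + np.exp(-X.sum(axis=1))),
    x=np.full(10, 0.5),
)
```

A higher value indicates a counterfactual whose prediction remains favorable under small perturbations, which is the robustness property the paper targets.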