CMACE: CMAES-based Counterfactual Explanations for Black-box Models
Authors: Xudong Yin, Yao Yang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments also demonstrate that CMACE is superior to a SOTA model-specific approach (Flexible Optimizable Counterfactual Explanations for Tree Ensembles, FOCUS) that is designed for tree-based models using gradient-based optimization. |
| Researcher Affiliation | Collaboration | 1Ant Group, Hangzhou, China 2Zhejiang Lab, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1 CMACE |
| Open Source Code | Yes | Our code is available at https://github.com/liuxia2023/cmace. |
| Open Datasets | Yes | The four datasets used are Heloc (FICO x ML Challenge Heloc Dataset), Wine (UCI Wine Quality Data Set), Compass (Kaggle Compass Dataset) and Shopping (UCI Shopping Dataset)... whose training data, testing data and model files are available at https://github.com/a-lucic/focus. |
| Dataset Splits | No | The paper provides details on training and test samples but does not explicitly mention or specify a validation dataset split or the methodology for creating such a split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper names the model types used (e.g., tree-based models, LR, MLP, SVM) but does not provide specific version numbers for any software dependencies or libraries required for reproducibility. |
| Experiment Setup | No | The paper states, "All the models are trained on training data whose hyperparameters are carefully tuned for better classification performance," but it does not provide the specific hyperparameter values or detailed training configurations necessary to reproduce the experimental setup. |
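As context for the method the table refers to: CMACE searches for counterfactuals with a gradient-free evolution strategy, querying the black-box model only through its predictions. The sketch below is a deliberately simplified (mu, lambda) evolution strategy with a toy threshold classifier, not the paper's Algorithm 1 (which uses full CMA-ES with covariance adaptation); the classifier, loss weights, and all function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical black-box classifier: predicts 1 if the feature sum exceeds a
# threshold. Stands in for any model queried only through predictions.
def black_box_predict(x):
    return int(x.sum() > 3.0)

def cf_objective(x, x_orig, target):
    """Counterfactual loss: L2 distance to the original instance, plus a
    large penalty while the black box has not flipped to the target class.
    (Illustrative weighting, not the paper's exact objective.)"""
    dist = np.linalg.norm(x - x_orig)
    penalty = 0.0 if black_box_predict(x) == target else 10.0
    return dist + penalty

def evolve_counterfactual(x_orig, target, pop=30, iters=200, sigma=0.5, seed=0):
    """Simplified (mu, lambda) evolution strategy: sample candidates around a
    mean, keep the best quarter, recenter, and shrink the step size. Full
    CMA-ES would additionally adapt a covariance matrix."""
    rng = np.random.default_rng(seed)
    mean = x_orig.copy()
    best, best_f = None, np.inf
    for _ in range(iters):
        cands = mean + sigma * rng.standard_normal((pop, x_orig.size))
        fits = np.array([cf_objective(c, x_orig, target) for c in cands])
        elite = cands[np.argsort(fits)[: pop // 4]]
        mean = elite.mean(axis=0)   # move the search distribution toward elites
        sigma *= 0.97               # simple step-size decay
        if fits.min() < best_f:
            best_f, best = fits.min(), cands[np.argmin(fits)]
    return best

x = np.array([0.5, 0.5, 0.5, 0.5])   # classified 0 (feature sum 2.0 <= 3.0)
cf = evolve_counterfactual(x, target=1)
print(black_box_predict(x), black_box_predict(cf))
```

The returned point flips the black-box prediction while staying close to the original instance, which is the validity/proximity trade-off the objective encodes.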