DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization
Authors: Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Hiroki Arimura
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | By experiments on real datasets, we confirm the effectiveness of our method in comparison with existing methods for CE. |
| Researcher Affiliation | Collaboration | 1Hokkaido University 2Fujitsu Laboratories Ltd. 3Tokyo Institute of Technology |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper states, 'All codes were implemented in Python 3.6 with scikit-learn and IBM ILOG CPLEX v12.8.' However, it does not provide any concrete access information (e.g., URL or explicit statement of release) for the source code of their methodology. |
| Open Datasets | Yes | We used the FICO dataset (D = 23) [FICO et al., 2018] and german dataset (D = 61) [Dua and Graff, 2017] |
| Dataset Splits | No | The paper states: 'We randomly split each dataset into train (70%) and test (30%) instances,' but does not explicitly provide information about a separate validation dataset split. |
| Hardware Specification | Yes | All experiments were conducted on 64-bit Ubuntu 18.04.1 LTS with Intel Xeon E5-1620 v4 3.50GHz CPU and 62.8 GiB memory |
| Software Dependencies | Yes | All codes were implemented in Python 3.6 with scikit-learn and IBM ILOG CPLEX v12.8. |
| Experiment Setup | Yes | We randomly split each dataset into train (70%) and test (30%) instances, and trained ℓ2-regularized logistic regression (LR) classifiers and random forest (RF) classifiers with T = 100 decision trees on each training dataset, respectively. [...] We set λ = 1.0 for the FICO dataset and λ = 0.01 for the german dataset, respectively. |
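The reported setup (random 70%/30% train/test split, ℓ2-regularized logistic regression, and a random forest with T = 100 trees) can be sketched with scikit-learn as below. This is a minimal illustration under assumptions: synthetic data stands in for the FICO and german datasets (with D = 23 mirroring the FICO feature count), and the λ values the paper reports belong to its own objective, so no regularization-strength mapping is attempted here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for a real dataset; D = 23 mirrors the FICO feature count.
X, y = make_classification(n_samples=1000, n_features=23, random_state=0)

# Random 70% train / 30% test split, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# l2-regularized logistic regression classifier.
lr = LogisticRegression(penalty="l2", max_iter=1000).fit(X_train, y_train)

# Random forest classifier with T = 100 decision trees.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(X_train.shape[0], X_test.shape[0], rf.n_estimators)
```

Note that this sketch only reproduces the classifier-training stage; the paper's counterfactual-explanation step itself is solved with IBM ILOG CPLEX and is not shown.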