Constraint-Driven Explanations for Black-Box ML Models
Authors: Aditya A. Shrotri, Nina Narodytska, Alexey Ignatiev, Kuldeep S. Meel, Joao Marques-Silva, Moshe Y. Vardi (pp. 8304-8314)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our empirical study demonstrates concrete uses of our tool CLIME in obtaining more meaningful explanations with high fidelity." and "5 Experiments: We seek to answer the following research questions through our empirical study: 1. How scalable is the certification framework presented in Sec. 4? 2. What benefits do constraints provide for analysing ML models? 3. How susceptible are constrained explanations to adversarial attacks?" |
| Researcher Affiliation | Collaboration | "1Rice University, Houston, USA; 2VMware Research Inc., Palo Alto, USA; 3Monash University, Melbourne, Australia; 4National University of Singapore, Singapore; 5IRIT, CNRS, Toulouse, France" |
| Pseudocode | Yes | "Algorithm 1: ExplainWithCLIME(f, ϕ, ε, N, x, x , πx, K)"; "Algorithm 2: computeFidelity(f, g, ε, δ, γ)"; "Algorithm 3: checkThreshold(f, g, ε, δ, γ)" |
| Open Source Code | Yes | "Code, results and full version of the text is available at https://gitlab.com/Shrotri/clime" |
| Open Datasets | Yes | "We consider the bank dataset (Moro, Cortez, and Rita 2014) that was also used in Deutch and Frost (2019)." and "Second, we consider the adult dataset (Kohavi 1996), originally taken from the Census bureau." |
| Dataset Splits | No | The paper mentions training models and evaluating accuracy but does not provide specific train/validation/test splits, percentages, or sample counts for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions various tools and models (CLIME, LIME, SHAP, random forest), but does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | "We train 10 Random Forest models with different random seeds and 20 trees and max depth is 7." |
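The quoted experiment setup (10 Random Forest models, distinct random seeds, 20 trees, max depth 7) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the synthetic dataset is a stand-in, since the paper's actual bank/adult preprocessing is not specified here.

```python
# Hypothetical sketch of the reported setup: 10 Random Forests,
# each with a different random seed, 20 trees, and max depth 7.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Small synthetic tabular dataset as a stand-in for bank/adult.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = []
for seed in range(10):  # "10 Random Forest models with different random seeds"
    clf = RandomForestClassifier(n_estimators=20, max_depth=7, random_state=seed)
    clf.fit(X, y)
    models.append(clf)
```

Note that the exact train/test split and feature encoding would still need to be recovered from the released code, as the splits row above indicates.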