Trust Regions for Explanations via Black-Box Probabilistic Certification
Authors: Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data. |
| Researcher Affiliation | Collaboration | ¹IBM Research, Yorktown Heights, NY, USA; ²Eberhard-Karls-Universität Tübingen, Tübingen, Germany. |
| Pseudocode | Yes | Our approach comprises Algorithms 1 and 2. Given the problem elements x0, f, θ defined in Section 2, Algorithm 1 outputs the largest half-width w that it claims is certified. |
| Open Source Code | Yes | Code will be available at https://github.com/Trusted-AI/AIX360. |
| Open Datasets | Yes | We experiment on two image datasets, namely ImageNet (Deng et al., 2009) (224×224 dimensions) and CIFAR10 (Krizhevsky, 2009) (32×32 dimensions), and two tabular datasets, HELOC (FICO, 2018b) (23-dimensional) and Arrhythmia (Vanschoren et al., 2013) (195-dimensional). |
| Dataset Splits | No | The paper discusses training and testing, but no explicit validation set split percentages or methodology are provided in the main text. |
| Hardware Specification | Yes | We used 4-core machines with 64 GB RAM and 1 NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions 'Gradient Boosted trees (with default settings) in scikit-learn' but does not specify a version number for scikit-learn or any other software dependencies. |
| Experiment Setup | Yes | In all the experiments the quality metric is fidelity as defined in eqn. 14 (in the Appendix), results are averaged over 10 runs, Q is varied from 10 to 10000, Z is set to 10, θ = 0.75 (in the main paper) and we used 4-core machines with 64 GB RAM and 1 NVIDIA A100 GPU. |
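To make the reported setup concrete, the certification loop that Algorithm 1 performs can be caricatured as below. This is an illustrative sketch only: the function names, the uniform-sampling query strategy, the simple agreement-rate fidelity, and the fixed candidate-width grid are all assumptions for illustration; the paper's actual query strategies and its fidelity definition (eqn. 14 in the Appendix) differ, and the paper additionally provides probabilistic guarantees that this sketch omits.

```python
import numpy as np

def estimate_fidelity(f, g, x0, w, Q, rng):
    """Estimate how often explanation g agrees with black-box model f
    over the hypercube of half-width w centered at x0, using Q uniform
    queries. (Illustrative stand-in for the paper's fidelity metric.)"""
    d = x0.shape[0]
    X = x0 + rng.uniform(-w, w, size=(Q, d))  # Q samples in the hypercube
    return np.mean(f(X) == g(X))              # fraction of agreement

def largest_certified_halfwidth(f, g, x0, theta, widths, Q, seed=0):
    """Toy analogue of Algorithm 1's output: the largest candidate
    half-width whose estimated fidelity meets the threshold theta
    (e.g., theta = 0.75 as in the paper's main experiments)."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for w in sorted(widths):
        if estimate_fidelity(f, g, x0, w, Q, rng) >= theta:
            best = w
    return best
```

With a toy model `f(x) = [x1 + x2 > -1]` and a constant explanation `g ≡ 1` around `x0 = (0, 0)`, fidelity is 1 for small half-widths and decays as the hypercube grows, so the returned half-width shrinks as θ is raised, mirroring the trade-off the experiments sweep over with Q from 10 to 10000.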