On the (In)fidelity and Sensitivity of Explanations
Authors: Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform our experiments on randomly selected images from MNIST, CIFAR-10, and ImageNet. In our comparisons, we restrict local variants of the explanations to MNIST, since sensitivity of function values given pixel perturbations makes more sense for grayscale rather than color images. To calculate our infidelity measure, we use the noisy baseline perturbation for local variants of the explanations and the square removal for global variants of the explanations, and use Monte Carlo sampling to estimate the measures. We show results comparing sensitivity and infidelity for local explanations on MNIST and global explanations on MNIST, CIFAR-10, and ImageNet in Table 1. |
| Researcher Affiliation | Academia | Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, Department of Machine Learning, Carnegie Mellon University; David I. Inouye, School of Electrical and Computer Engineering, Purdue University; Pradeep Ravikumar, Department of Machine Learning, Carnegie Mellon University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Implementation available at https://github.com/chihkuanyeh/saliency_evaluation. |
| Open Datasets | Yes | We perform our experiments on randomly selected images from MNIST, CIFAR-10, and ImageNet. |
| Dataset Splits | No | The paper mentions using 'randomly selected images from MNIST, CIFAR-10, and ImageNet' and training models, but does not provide specific train/validation/test split percentages, sample counts, or explicit details about splitting methodology or predefined standard splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper provides a general description of the experimental setup, including datasets and perturbation methods used ('noisy baseline perturbation' and 'square removal', 'Monte Carlo Sampling'), but does not include specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) for model training. |
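The Monte Carlo estimation of infidelity noted in the table can be sketched as follows. This is a simplified illustration, not the authors' released implementation (which is linked above): `estimate_infidelity`, `perturb_sampler`, and the toy linear model are hypothetical names introduced here, and the expectation being estimated is the squared gap between the perturbation's dot product with the explanation and the resulting change in model output.

```python
import numpy as np

def estimate_infidelity(f, x, explanation, perturb_sampler, n_samples=1000):
    """Monte Carlo estimate of explanation infidelity:
    E_I[(I . explanation - (f(x) - f(x - I)))^2] over random perturbations I."""
    fx = f(x)
    sq_errs = []
    for _ in range(n_samples):
        I = perturb_sampler(x.shape)  # e.g. Gaussian noise around the input
        predicted_drop = float(np.dot(I.ravel(), explanation.ravel()))
        actual_drop = fx - f(x - I)
        sq_errs.append((predicted_drop - actual_drop) ** 2)
    return float(np.mean(sq_errs))

# Toy check: for a linear model f(z) = w.z, the gradient w is a perfectly
# faithful explanation, so its infidelity is zero up to floating-point error.
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(np.dot(w, z))
x = np.array([0.3, 0.1, -0.4])
noise = lambda shape: rng.normal(0.0, 0.1, shape)  # noisy baseline perturbation
print(estimate_infidelity(f, x, w, noise))
```

A less faithful explanation (e.g. a perturbed copy of `w`) would yield a strictly positive estimate, which is how the measure ranks competing explanations.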