Robust Explanation for Free or At the Cost of Faithfulness

Authors: Zeren Tan, Yang Tian

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We are the first to introduce this trade-off and theoretically prove its existence for Smooth Grad. Theoretical findings are verified by empirical evidence on six state-of-the-art explanation methods and four backbones.
Researcher Affiliation | Academia | (1) Institute for Interdisciplinary Information Science, Tsinghua University, 100084, Beijing, China; (2) Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, 100084, Beijing, China; (3) Department of Psychology, Tsinghua University, 100084, Beijing, China.
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper mentions third-party libraries and their implementations (e.g., the 'robustness' library, 'vision-transformers-cifar10', 'captum') and links to them, but it does not state that the authors release their own source code for the methodology described in the paper.
Open Datasets | Yes | We perform our experiments on 1000 randomly selected images from CIFAR10.
Dataset Splits | No | The paper mentions using '1000 randomly selected images from CIFAR10' for experiments but provides no training, validation, or test splits for its own experimental setup, nor does it cite predefined splits.
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU models, CPU models, memory) used to run its experiments; it only mentions general libraries for training models.
Software Dependencies | No | The paper names software components such as the 'robustness' library, 'vision-transformers-cifar10', and 'captum' used for implementation, but it gives no version numbers for these or other key dependencies.
Experiment Setup | Yes | The parameters used for each explanation method are as follows. Integrated Gradient: zero baseline and 10 intermediate points to compute the integral. Smooth Grad: σ = 0.03 as the default value; the number of samples n is determined by σ (n = 10 for σ < 0.01, and n = (σ/0.01) × 10 for σ ≥ 0.01). LIME: the Quickshift segmentation algorithm segments images into superpixels, with kernel size 1, max_dist 200, and ratio 0.1; num_samples is 100 and α in the Ridge regressor is 1. SHAP: num_samples is 100.
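The quoted setup for Integrated Gradient and Smooth Grad can be sketched as follows. This is a minimal illustration, not the authors' code: it substitutes a hypothetical linear model with an analytic gradient for the paper's CIFAR10 backbones (in practice the paper's experiments rely on library implementations such as captum), but it uses the stated hyperparameters — zero baseline with 10 intermediate points for Integrated Gradient, and σ = 0.03 with the n = (σ/0.01) × 10 sampling rule for Smooth Grad.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "model" standing in for a CIFAR10 backbone:
# f_c(x) = W[c] @ x, so grad_x f_c(x) = W[c] (analytic, for illustration only).
d, num_classes = 12, 10
W = rng.normal(size=(num_classes, d))

def grad(x, c):
    return W[c]

def integrated_gradients(x, c, baseline=None, n_steps=10):
    # Zero baseline and 10 intermediate points, as stated in the paper's setup.
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(n_steps) + 0.5) / n_steps  # midpoint Riemann sum
    avg_grad = np.mean(
        [grad(baseline + a * (x - baseline), c) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

def smooth_grad(x, c, sigma=0.03):
    # Paper's rule: n = 10 for sigma < 0.01, else n = (sigma / 0.01) * 10.
    n = 10 if sigma < 0.01 else int(round(sigma / 0.01 * 10))
    noisy = x + sigma * rng.normal(size=(n, x.size))
    return np.mean([grad(z, c) for z in noisy], axis=0)

x = rng.normal(size=d)
ig = integrated_gradients(x, c=3)
sg = smooth_grad(x, c=3, sigma=0.03)
print(ig.shape, sg.shape)  # → (12,) (12,)
```

Because the stand-in model is linear, the gradient is constant, so Smooth Grad returns exactly W[3] and Integrated Gradient returns x * W[3]; with a real nonlinear backbone the noise samples and the path integral matter.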