What does LIME really see in images?
Authors: Damien Garreau, Dina Mardaoui
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experimentally that for models that are sufficiently smooth with respect to their inputs, the outputs of LIME are similar to the sum over superpixels of integrated gradients, another interpretability method. [...] In this section, we show experimentally that LIME explanations are similar to the approximated explanations derived in the previous section. |
| Researcher Affiliation | Academia | ¹Université Côte d'Azur, Inria, CNRS, LJAD, France; ²Polytech Nice. |
| Pseudocode | No | The paper describes methods in prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code of all experiments is available at https://github.com/dgarreau/image_lime_theory |
| Open Datasets | Yes | We first considered images from the CIFAR10 dataset (Krizhevsky et al., 2009), that is, 32×32 RGB images belonging to ten categories. We then moved to more realistic images coming from the test set of the 2017 large scale visual recognition challenge (LSVRC, Russakovsky et al., 2015). |
| Dataset Splits | No | The paper mentions using a 'subset of 1000 images of the test set' for its experiments, together with pre-trained models, but does not specify train/validation/test splits in a way that allows reproduction of the data partitioning beyond that test subset. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'three pretrained models from the Keras framework' but does not specify version numbers for Keras or any other software dependencies. |
| Experiment Setup | Yes | In each case, we ran LIME with n = 1000 examples, default regularization λ = 1, and zero replacement. For the sum of integrated gradients, we considered m = 20 steps in Eq. (11), as in Sundararajan et al. (2017). Default parameters were kept, with the exception of the kernel size used by the quickshift algorithm, which we decreased to 1 to get wider superpixels. (Hedged code sketches of this setup follow the table.) |
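To make the reported setup concrete, here is a minimal sketch of running LIME with these settings, assuming the standard `lime` Python package and TensorFlow/Keras. It is not the authors' code: `img` is a placeholder H×W×3 uint8 array (e.g. a resized LSVRC test image), `InceptionV3` stands in for one of the unspecified pretrained Keras models, and the quickshift parameters other than `kernel_size` are assumed to stay at LIME's defaults.

```python
# Sketch of the reported LIME setup: n = 1000 samples, zero replacement,
# quickshift with kernel_size = 1. `img` and `model` are placeholders.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input,
)
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

model = InceptionV3(weights="imagenet")  # stand-in for the pretrained models

def classifier_fn(images):
    """Batch of HxWx3 images -> class probabilities, as LIME expects."""
    return model.predict(preprocess_input(images.astype(np.float32)))

# Quickshift with kernel_size = 1 as reported; the remaining segmentation
# parameters are assumed to be LIME's defaults.
segmenter = SegmentationAlgorithm("quickshift", kernel_size=1,
                                  max_dist=200, ratio=0.2)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,
    classifier_fn,
    hide_color=0,        # "zero replacement": masked superpixels set to 0
    num_samples=1000,    # n = 1000 perturbed examples
    segmentation_fn=segmenter,
)
# LIME's default surrogate is sklearn's Ridge(alpha=1), matching λ = 1.
```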
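For the comparison baseline, a hedged sketch of integrated gradients with m = 20 steps and a zero baseline, summed over superpixels as in the quoted experiment. The function names and the Riemann-sum discretization are illustrative, not taken from the authors' repository.

```python
# Illustrative sketch: integrated gradients (m = 20 steps, zero baseline),
# then aggregated per superpixel. Names here are assumptions.
import numpy as np
import tensorflow as tf

def integrated_gradients(model, x, target_class, m=20):
    """Riemann approximation of integrated gradients with a zero baseline.

    x: tensor of shape (1, H, W, 3), already preprocessed for `model`.
    """
    baseline = tf.zeros_like(x)
    alphas = tf.reshape(tf.linspace(1.0 / m, 1.0, m), (m, 1, 1, 1))
    path = baseline + alphas * (x - baseline)   # m interpolated images
    with tf.GradientTape() as tape:
        tape.watch(path)
        probs = model(path)[:, target_class]
    grads = tape.gradient(probs, path)          # shape (m, H, W, 3)
    avg_grads = tf.reduce_mean(grads, axis=0)   # average along the path
    return ((x - baseline)[0] * avg_grads).numpy()

def sum_over_superpixels(attributions, segments):
    """One attribution score per superpixel of the segmentation map."""
    pixel_scores = attributions.sum(axis=-1)    # sum over RGB channels
    return np.array([pixel_scores[segments == j].sum()
                     for j in np.unique(segments)])
```

With `segments = segmenter(img)`, the vector returned by `sum_over_superpixels` can then be compared superpixel-by-superpixel against LIME's surrogate coefficients, which is the comparison the quoted experiments perform.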