Sanity Checks for Saliency Maps
Authors: Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. |
| Researcher Affiliation | Collaboration | Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim; juliusad@mit.edu, {gilmer,muelly,goodfellow,mrtz,beenkim}@google.com; Google Brain; University of California, Berkeley |
| Pseudocode | No | The paper describes methods in prose and through mathematical formulations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | All code to replicate our findings will be available here: https://goo.gl/hBmhDt |
| Open Datasets | Yes | Inception v3 model trained on ImageNet; CNN trained on Fashion MNIST; MLP trained on MNIST; MNIST test set for a CNN. |
| Dataset Splits | No | The paper mentions training models on datasets like ImageNet, MNIST, and Fashion MNIST, and refers to a “test set,” but does not provide specific training, validation, or test split percentages, sample counts, or explicit splitting methodologies for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or exact cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers (e.g., TensorFlow 2.x, PyTorch 1.x), required to replicate the experiments. |
| Experiment Setup | No | The paper describes the general training process for models (e.g., training to >95% accuracy) but does not provide specific experimental setup details such as hyperparameters (learning rate, batch size, number of epochs, optimizer settings) or detailed training configurations. |
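The paper's core finding (quoted in the Research Type row above) rests on its model-parameter randomization test: compute a saliency map for a trained model, recompute it after randomizing the weights, and check whether the two maps differ. Since the paper provides no pseudocode, the following is a minimal NumPy sketch of that idea; the linear model, gradient-based saliency, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_saliency(w, x):
    # For a linear model f(x) = w . x, the input gradient is just w;
    # absolute values are a common choice when visualizing saliency.
    # (Illustrative stand-in for the saliency methods the paper tests.)
    return np.abs(w)

def rank_correlation(a, b):
    # Spearman rank correlation, computed as the Pearson correlation
    # of the two rank vectors (one of the similarity metrics the
    # paper uses to compare saliency maps).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def model_randomization_test(w_trained, x, rng):
    # Sanity check: replace the trained parameters with random ones
    # and compare the resulting saliency maps. A method that truly
    # depends on the learned model should yield low similarity.
    w_random = rng.standard_normal(w_trained.shape)
    s_trained = gradient_saliency(w_trained, x)
    s_random = gradient_saliency(w_random, x)
    return rank_correlation(s_trained, s_random)
```

A high correlation between the trained-model and random-model maps would indicate the saliency method is insensitive to the model parameters, i.e. it fails the sanity check.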