Full-Gradient Representation for Neural Network Visualization
Authors: Suraj Srinivas, François Fleuret
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally evaluate the usefulness of FullGrad in explaining model behaviour with two quantitative tests: pixel perturbation and remove-and-retrain. (A sketch of the pixel perturbation protocol appears below the table.) |
| Researcher Affiliation | Academia | Suraj Srinivas (Idiap Research Institute & EPFL, suraj.srinivas@idiap.ch); François Fleuret (Idiap Research Institute & EPFL, francois.fleuret@idiap.ch) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide access to source code for the methodology it describes. |
| Open Datasets | Yes | We use our pixel perturbation test to evaluate full-gradient saliency maps on the ImageNet 2012 validation dataset, using a VGG-16 model with batch normalization. We use ROAR to evaluate full-gradient saliency maps on the CIFAR100 dataset, using a 9-layer VGG model. |
| Dataset Splits | Yes | We use our pixel perturbation test to evaluate full-gradient saliency maps on the ImageNet 2012 validation dataset, using a VGG-16 model with batch normalization. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not name the ancillary software (e.g., libraries or solvers) with version numbers needed to replicate the experiments. |
| Experiment Setup | No | The paper names the models used (VGG-16, 9-layer VGG) and some saliency post-processing parameters (e.g., bilinear upsampling, rescaling, absolute value), but it does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) or system-level training settings for the neural networks. (A sketch of this post-processing appears below the table.) |
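
For reference, the FullGrad saliency map described in the paper combines an input-gradient term with per-layer bias-gradient terms, each passed through a post-processing operator ψ(·) built from the absolute value, rescaling, and bilinear upsampling steps noted in the Experiment Setup row. The PyTorch sketch below is a minimal reading of that aggregation, assuming the input gradient and bias-gradient maps have already been computed; the per-map min-max rescaling, the tensor shapes, and all function names are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def postprocess(maps, target_size):
    # psi(.): abs -> min-max rescale of each (sample, channel) map -> bilinear upsample.
    maps = maps.abs()
    mins = maps.amin(dim=(-2, -1), keepdim=True)
    maxs = maps.amax(dim=(-2, -1), keepdim=True)
    maps = (maps - mins) / (maxs - mins + 1e-8)  # rescale each map to [0, 1]
    return F.interpolate(maps, size=target_size, mode="bilinear", align_corners=False)

def fullgrad_saliency(x, input_grad, bias_grads):
    """Aggregate FullGrad terms into a single (B, 1, H, W) saliency map.

    x:          (B, C, H, W) input images.
    input_grad: (B, C, H, W) gradient of the chosen class score w.r.t. x.
    bias_grads: list of (B, C_l, H_l, W_l) bias-gradient maps, one per layer.
    """
    size = x.shape[-2:]
    # Input-gradient term: psi(grad * x), summed over channels.
    saliency = postprocess(input_grad * x, size).sum(dim=1, keepdim=True)
    # Bias terms: psi applied to each channel map, summed over layers and channels.
    for bias_grad in bias_grads:
        saliency = saliency + postprocess(bias_grad, size).sum(dim=1, keepdim=True)
    return saliency
```

Upsampling every intermediate map to the input resolution before summing is what lets terms from layers of different spatial sizes be combined into one map.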
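The pixel perturbation test quoted in the first row removes the k least-salient pixels and measures how much the model's output for the originally predicted class changes; a smaller change indicates a more faithful saliency map. Below is a hedged sketch of that protocol: the zero replacement value and all names are illustrative assumptions, not taken from the paper, and the model is assumed to be in eval mode.

```python
import torch

@torch.no_grad()
def pixel_perturbation_change(model, x, saliency, k, baseline=0.0):
    """Remove the k least-salient pixels per image and return the absolute
    change in the originally predicted class score (smaller = more faithful)."""
    logits = model(x)
    pred = logits.argmax(dim=1, keepdim=True)      # originally predicted class
    orig_score = logits.gather(1, pred).squeeze(1)

    # Rank spatial positions by saliency and take the k least-salient ones.
    B, C, H, W = x.shape
    flat = saliency.flatten(start_dim=1)           # (B, H*W)
    idx = flat.topk(k, dim=1, largest=False).indices
    rows, cols = idx // W, idx % W

    x_pert = x.clone()
    for b in range(B):
        x_pert[b, :, rows[b], cols[b]] = baseline  # overwrite across all channels

    new_score = model(x_pert).gather(1, pred).squeeze(1)
    return (orig_score - new_score).abs()
```

Sweeping k over a range of values and averaging the returned change across images gives the kind of comparison curve the paper uses to rank saliency methods.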