One Explanation is Not Enough: Structured Attention Graphs for Image Classification

Authors: Vivswan Shitole, Fuxin Li, Minsuk Kahng, Prasad Tadepalli, Alan Fern

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct a user study comparing the use of SAGs to traditional saliency maps for answering comparative counterfactual questions about image classifications. Our results show that user accuracy is increased significantly when presented with SAGs compared to standard saliency map baselines.
Researcher Affiliation | Academia | Oregon State University {shitolev, lif, minsuk.kahng, tadepall, alan.fern}@oregonstate.edu
Pseudocode | No | The paper describes algorithms verbally but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code for generating SAGs: https://github.com/viv92/structured-attention-graphs
Open Datasets | Yes | The ImageNet validation dataset of 50,000 images is used for our analysis. ... ImageNet [7]
Dataset Splits | No | The paper analyzes explanations for pre-trained CNNs (VGGNet, ResNet-50) using the ImageNet validation dataset. It does not describe training/validation/test splits for a model trained by the authors.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | we set Ph = 0.9 as a sufficiently high fraction in our experiments. ... We set the hyperparameter r = 7 in all our experiments. ... We set m = 10 and vary 0 < k < m as hyperparameters. ... We chose q = 15 as a hyperparameter. ... The constant c is set to 3... Pl is set to 40%...
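The hyperparameter values quoted in the Experiment Setup row can be collected into one configuration for reference. This is a minimal sketch: the dictionary keys and function name below are illustrative and do not come from the authors' released code.

```python
# Hypothetical configuration gathering the hyperparameter values
# reported in the paper's experiment setup (names are illustrative,
# not identifiers from the authors' repository).
sag_config = {
    "P_h": 0.9,   # set to 0.9 as a "sufficiently high fraction"
    "r": 7,       # set to 7 in all experiments
    "m": 10,      # set to 10; k is varied with 0 < k < m
    "q": 15,      # chosen as 15
    "c": 3,       # constant set to 3
    "P_l": 0.40,  # set to 40%
}

def valid_k_values(cfg):
    """Return the k values allowed by the paper's 0 < k < m sweep."""
    return list(range(1, cfg["m"]))

print(valid_k_values(sag_config))  # → [1, 2, 3, 4, 5, 6, 7, 8, 9]
```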