Grid Saliency for Context Explanations of Semantic Segmentation
Authors: Lukas Hoyer, Mauricio Munoz, Prateek Katiyar, Anna Khoreva, Volker Fischer
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We investigate the effectiveness of grid saliency on a synthetic dataset with an artificially induced bias between objects and their context as well as on the real-world Cityscapes dataset using state-of-the-art segmentation networks. Our results show that grid saliency can be successfully used to provide easily interpretable context explanations and, moreover, can be employed for detecting and localizing contextual biases present in the data. |
| Researcher Affiliation | Industry | Lukas Hoyer Mauricio Munoz Prateek Katiyar Anna Khoreva Volker Fischer Bosch Center for Artificial Intelligence lukas.hoyer@outlook.com {firstname.lastname}@bosch.com |
| Pseudocode | No | The paper describes the grid saliency method and its gradient-based variants using mathematical formulations and textual descriptions, but it does not contain a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper states: 'The code for generating the proposed synthetic dataset with induced context biases can be found here: https://github.com/boschresearch/GridSaliency-ToyDatasetGen.' This repository covers only the synthetic dataset generator, not the grid saliency method itself. |
| Open Datasets | Yes | The proposed synthetic toy dataset consists of gray scale images of size 64x64 pixels, generated by combining upscaled digits from MNIST [13] with foreground and background textures from [52, 53]... We use 500 finely annotated images of the Cityscapes validation set. |
| Dataset Splits | No | The paper mentions 'train/test splits' for the synthetic dataset and evaluates on the 'Cityscapes validation set', but it does not report exact split sizes or sample counts for training, validation, and testing, which limits reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'SGD' for optimization and architectures like 'U-Net', 'VGG16', 'DeepLabv3+', and 'Mobilenetv2'. However, it does not specify any software names with version numbers (e.g., specific deep learning frameworks like PyTorch or TensorFlow, or their versions) required for replication. |
| Experiment Setup | Yes | The saliency maps with a size of 4x4 are optimized using SGD with momentum of 0.5 and a learning rate of 0.2 for 100 steps starting with a 0.5 initialized mask. A weighting factor of λ = 0.05 is used... we optimize a coarse 16 by 32 pixel mask using SGD with a learning rate of 1 for 80 steps and use λ = 0.01. |
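The optimization settings quoted in the 'Experiment Setup' row can be sketched as a small perturbation-based loop. The sketch below is an illustrative reconstruction, not the authors' code: the `predict` callable, the mean-image perturbation baseline, the finite-difference gradients, and the exact loss terms are assumptions; only the hyperparameters (a 4x4 grid mask, SGD with momentum 0.5, learning rate 0.2, 100 steps, a 0.5-initialized mask, and λ = 0.05) come from the paper.

```python
import numpy as np

def upsample(mask, shape):
    """Nearest-neighbour upsampling of the coarse grid mask to image size."""
    reps = (shape[0] // mask.shape[0], shape[1] // mask.shape[1])
    return np.kron(mask, np.ones(reps))

def saliency_loss(mask, predict, image, request, lam):
    """lam * mask sparsity + prediction drop inside the request region.

    This mirrors the grid saliency objective in spirit; the exact terms
    are an assumption for illustration.
    """
    m_up = upsample(mask, image.shape)
    # perturb toward the mean-image baseline where the mask is low
    perturbed = m_up * image + (1.0 - m_up) * image.mean()
    drop = np.abs(predict(image) - predict(perturbed)) * request
    return lam * mask.mean() + drop.sum() / max(request.sum(), 1.0)

def optimize_mask(predict, image, request, grid=(4, 4),
                  lam=0.05, lr=0.2, momentum=0.5, steps=100, eps=1e-3):
    """SGD-with-momentum optimization of a 0.5-initialized grid mask.

    Finite differences replace autograd so the sketch stays
    dependency-free; a real implementation would backpropagate
    through the segmentation network instead.
    """
    m = np.full(grid, 0.5)
    v = np.zeros_like(m)
    for _ in range(steps):
        grad = np.zeros_like(m)
        for idx in np.ndindex(*grid):
            m_plus, m_minus = m.copy(), m.copy()
            m_plus[idx] += eps
            m_minus[idx] -= eps
            grad[idx] = (
                saliency_loss(np.clip(m_plus, 0, 1), predict, image, request, lam)
                - saliency_loss(np.clip(m_minus, 0, 1), predict, image, request, lam)
            ) / (2 * eps)
        v = momentum * v - lr * grad
        m = np.clip(m + v, 0.0, 1.0)
    return m
```

With a toy per-pixel `predict`, the optimizer drives the mask toward 1 in grid cells needed to preserve the prediction inside the request region and toward 0 elsewhere, which is the behaviour the sparsity weight λ trades off.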
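The toy-dataset description in the 'Open Datasets' row (upscaled MNIST digits composited with foreground and background textures into 64x64 grayscale images) can also be illustrated with a minimal sketch. The binarization threshold and the nearest-neighbour upscaling here are assumptions for illustration only; the authors' actual generator is in the repository linked above.

```python
import numpy as np

def compose_toy_image(digit, fg_texture, bg_texture, out_size=64):
    """Composite an upscaled MNIST-style digit onto textured fore/background.

    digit:       2D array in [0, 1] (e.g. a 28x28 MNIST image)
    fg_texture:  out_size x out_size foreground texture
    bg_texture:  out_size x out_size background texture
    Returns the composed grayscale image and the binary foreground mask,
    which doubles as the segmentation ground truth.
    """
    # nearest-neighbour upscaling of the digit to out_size x out_size
    ys = (np.arange(out_size) * digit.shape[0]) // out_size
    xs = (np.arange(out_size) * digit.shape[1]) // out_size
    upscaled = digit[np.ix_(ys, xs)]

    # binarize to decide which pixels take the foreground texture
    fg_mask = upscaled > 0.5
    image = np.where(fg_mask, fg_texture, bg_texture)
    return image, fg_mask.astype(np.float32)
```

Because the foreground/background textures are chosen independently of the digit, a context bias can be induced simply by correlating the background texture with the digit class when sampling.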