What You See is What You Classify: Black Box Attributions

Authors: Steven Stalder, Nathanael Perraudin, Radhakrishna Achanta, Fernando Perez-Cruz, Michele Volpi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our attributions are superior to established methods both visually and quantitatively with respect to the PASCAL VOC-2007 and Microsoft COCO-2014 datasets." "We present three results to establish the advantages of our Explainer over existing approaches." "In Tab. 2 we illustrate the accuracy of the masking strategies, on VOC-2007 and COCO-2014, with VGG-16 and ResNet-50 as Explanandum, respectively."
Researcher Affiliation | Academia | Steven Stalder, Swiss Data Science Center, ETH Zurich, Switzerland; Nathanaël Perraudin, Swiss Data Science Center, ETH Zurich, Switzerland; Radhakrishna Achanta, Swiss Data Science Center, EPFL, Switzerland; Fernando Perez-Cruz, Swiss Data Science Center, ETH Zurich, Switzerland; Michele Volpi, Swiss Data Science Center, ETH Zurich, Switzerland
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | "Code is available at: https://github.com/stevenstalder/NN-Explainer"
Open Datasets | Yes | "We evaluate our methodology on PASCAL Visual Object Classes (VOC-2007) [9] and Microsoft Common Objects in Context (COCO-2014) [18]."
Dataset Splits | Yes | "To this end, we use the full training set of VOC-2007 and 90% of the COCO-2014 training set for fine-tuning. To assess generalization, we use the test set of VOC-2007 and the validation set of COCO-2014, respectively. For choosing hyperparameters, we use the VOC-2007 validation set and the remaining 10% of the COCO-2014 training set."
Hardware Specification | No | The paper does not specify the hardware (exact GPU/CPU models, clock speeds, or memory) used to run its experiments.
Software Dependencies | Yes | "In this work, we implemented models and experiments using PyTorch [19] and PyTorch Lightning [10]." The cited PyTorch Lightning repository is https://github.com/PyTorchLightning/pytorch-lightning (GitHub, 2019).
Experiment Setup | Yes | "For all our experiments, we resized input images to 224x224 pixels and normalized them on the mean and standard deviation of ImageNet." "We formulate the loss as a combination of four terms: L_E(x, Y, S, m, n) = L_c(x, Y, m) + λ_e L_e(x, m) + λ_a L_a(m, n, S) + λ_tv L_tv(m, n), where λ_e, λ_a and λ_tv are hyperparameters balancing the loss terms."
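The four-term loss quoted above can be sketched as a simple weighted sum. This is a minimal illustration only: the individual terms (classification, entropy, area, and total-variation losses) are passed in as precomputed scalars, and the default λ values here are placeholders, not the paper's actual hyperparameter settings.

```python
def explainer_loss(l_c, l_e, l_a, l_tv, lam_e=0.05, lam_a=0.3, lam_tv=0.01):
    """Combine the Explainer's four loss terms:

        L_E = L_c + λ_e * L_e + λ_a * L_a + λ_tv * L_tv

    l_c  : classification loss term
    l_e  : entropy loss term
    l_a  : area loss term
    l_tv : total-variation loss term
    The lambda defaults are illustrative placeholders.
    """
    return l_c + lam_e * l_e + lam_a * l_a + lam_tv * l_tv

# Example: with placeholder weights, the terms are simply rescaled and summed.
total = explainer_loss(1.0, 2.0, 3.0, 4.0)  # 1.0 + 0.1 + 0.9 + 0.04 = 2.04
```

In practice each term would be a differentiable tensor (e.g. a PyTorch scalar) so the combined loss can be backpropagated through the Explainer network.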