Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Authors: Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. (2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map. |
| Researcher Affiliation | Academia | Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel; University of Tübingen, Germany |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and data are available at https://bethgelab.github.io/testing_visualizations/ |
| Open Datasets | Yes | The natural stimuli are selected from the validation set of the ImageNet ILSVRC 2012 dataset (Russakovsky et al., 2015) according to their activations for the feature map of interest. (A selection sketch follows the table.) |
| Dataset Splits | Yes | The natural stimuli are selected from the validation set of the ImageNet ILSVRC 2012 dataset (Russakovsky et al., 2015) according to their activations for the feature map of interest. |
| Hardware Specification | Yes | Stimulus presentation and data collection are controlled via a desktop computer (Intel Core i5-4460 CPU, AMD Radeon R9 380 GPU) |
| Software Dependencies | Yes | Stimulus presentation and data collection are controlled via a desktop computer (...) running PsychoPy (Peirce et al., 2019, version 3.0) under Python 3.6. (...) We perform the optimization using lucid 0.3.8 and TensorFlow 1.15.0 (Abadi et al., 2015) (A visualization sketch follows the table.) |
| Experiment Setup | Yes | In both studies, the task is to choose the one image out of two natural query images (two-alternative forced choice paradigm) that the participant considers to also elicit a strong activation given some reference images (see Fig. 2). Apart from the image choice, we record the participant's confidence level and reaction time. Specifically, responses are given by clicking on the confidence levels belonging to either query image. (...) we use the hyperparameters as specified in Olah et al. (2017). (A trial sketch follows the table.) |
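
To make the Open Datasets and Dataset Splits rows concrete, here is a minimal sketch of ranking ImageNet validation images by how strongly they activate one feature map. The helper `get_feature_map_activation` and the top-k selection are assumptions for illustration; the paper computes activations with Inception V1 but does not prescribe this exact interface.

```python
import numpy as np

def select_top_images(image_paths, get_feature_map_activation, k=9):
    """Return the k images that most strongly activate the feature map of interest.

    `get_feature_map_activation` is a hypothetical callable mapping an image
    path to a scalar score, e.g. the spatial mean of one Inception V1
    feature map's activations for that image.
    """
    scores = np.array([get_feature_map_activation(p) for p in image_paths])
    top = np.argsort(scores)[::-1][:k]  # indices sorted by strongest activation
    return [image_paths[i] for i in top]
```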
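The synthetic reference images follow Olah et al. (2017); with the lucid 0.3.8 / TensorFlow 1.15 stack named in the Software Dependencies row, a feature-map visualization can be rendered roughly as below. The layer name "mixed4a" and channel index 476 are placeholders, not the paper's actual targets.

```python
from lucid.modelzoo.vision_models import InceptionV1
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render

model = InceptionV1()
model.load_graphdef()

# Maximize the mean activation of one channel (feature map); the layer name
# and channel index here are illustrative placeholders.
objective = objectives.channel("mixed4a", 476)
images = render.render_vis(model, objective)  # list holding the optimized image
```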
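For the Experiment Setup row, a single two-alternative forced-choice trial in PsychoPy could look like the sketch below. It is a simplification under stated assumptions: image paths, window settings, and the keypress response are placeholders, whereas the actual study records confidence via mouse clicks on rating levels belonging to either query image.

```python
from psychopy import visual, core, event

win = visual.Window(size=(1920, 1080), units="pix", fullscr=False)
query_left = visual.ImageStim(win, image="query_a.png", pos=(-300, 0))
query_right = visual.ImageStim(win, image="query_b.png", pos=(300, 0))

# Draw both query images, then start timing from stimulus onset.
clock = core.Clock()
query_left.draw()
query_right.draw()
win.flip()
clock.reset()

# Placeholder response: a keypress stands in for the study's mouse clicks on
# confidence levels; `timeStamped` returns the reaction time in seconds.
choice, reaction_time = event.waitKeys(keyList=["left", "right"],
                                       timeStamped=clock)[0]
win.close()
```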