Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

Authors: Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Remi Cadene, Lore Goetschalckx, Laurent Gardes, Thomas Serre

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approach yields significantly better results both qualitatively and quantitatively and unlocks efficient and interpretable feature visualizations for large state-of-the-art neural networks. We also show that our approach exhibits an attribution mechanism allowing us to augment feature visualizations with spatial importance. We validate our method on a novel benchmark for comparing feature visualization methods, and release its visualizations for all classes of the ImageNet dataset on Lens.
Researcher Affiliation | Collaboration | (1) Carney Institute for Brain Science, Brown University; (2) Artificial and Natural Intelligence Toulouse Institute; (3) Institut de Recherche Technologique Saint-Exupéry; (4) Innovation & Research Division, SNCF; (5) ELLIS Alicante, Spain.
Pseudocode | Yes | Algorithm 1: MACO (a hedged sketch of the magnitude-constrained parameterization the algorithm name refers to is given after this table).
Open Source Code | Yes | Finally, we used the implementations of [1] and CBR, which are available in the Xplique library [66], which is based on Lucid.
Open Datasets | Yes | We validate our method on a novel benchmark for comparing feature visualization methods, and release its visualizations for all classes of the ImageNet dataset on Lens. The website allows browsing the most important concepts learned by a ResNet50 for all 1,000 classes of ImageNet [42].
Dataset Splits | No | The paper mentions using the "ImageNet validation set" for computing FID scores but does not specify how the training, validation, and test splits were defined for the models it evaluates, as these models are pre-trained. It does not provide explicit split percentages or counts for the data used in its own experiments.
Hardware Specification | No | The paper mentions "The computing hardware was supported in part by NIH Office of the Director grant #S10OD025181 via the Center for Computation and Visualization (CCV) at Brown University." but does not provide specific hardware details (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper mentions using "the Xplique library [66] which is based on Lucid" and the "NAdam optimizer [65]" but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We used the NAdam optimizer [65] with lr = 1.0 and N = 256 optimization steps. For MACO, τ consists of only two transformations: first, we add uniform noise δ ∼ U([−0.1, 0.1])^{W×H}; second, we crop and resize the image with a crop size drawn from the normal distribution N(0.25, 0.1), which corresponds on average to 25% of the image (see the optimization-loop sketch after this table).
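The following is a minimal sketch, not the authors' released code, of the magnitude-constrained parameterization suggested by the MACO name and Algorithm 1: the Fourier magnitude spectrum is held fixed and only the phase is optimized. The class name, tensor shapes, and the sigmoid mapping to [0, 1] are illustrative assumptions written in PyTorch rather than the paper's TensorFlow-based Xplique implementation.

```python
# Hedged sketch of a magnitude-constrained image parameterization.
import math
import torch


class MagnitudeConstrainedImage(torch.nn.Module):
    def __init__(self, magnitude: torch.Tensor):
        super().__init__()
        # Fixed magnitude spectrum (e.g. an average over natural images); never trained.
        self.register_buffer("magnitude", magnitude)  # assumed shape (C, H, W//2 + 1)
        # Free parameter: one phase per Fourier coefficient, initialized at random.
        self.phase = torch.nn.Parameter((torch.rand_like(magnitude) * 2 - 1) * math.pi)

    def forward(self) -> torch.Tensor:
        # Recombine the fixed magnitude with the learned phase, then invert the FFT.
        spectrum = torch.polar(self.magnitude, self.phase)  # complex spectrum
        image = torch.fft.irfft2(spectrum)                  # spatial image (C, H, W)
        return torch.sigmoid(image)                         # squash into [0, 1] (illustrative)
```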
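The quoted experiment setup can be read as a short optimization loop. The sketch below is an assumption-laden illustration, not the released Xplique implementation: `model`, `neuron_index`, and the `MagnitudeConstrainedImage` parameterization from the previous block are placeholders, and only the NAdam optimizer with lr = 1.0, the 256 steps, the uniform noise in [−0.1, 0.1], and the N(0.25, 0.1) crop size come from the quoted setup.

```python
import torch
import torch.nn.functional as F


def feature_visualization(model, neuron_index, image_param, steps=256, lr=1.0):
    """Maximize one activation of `model` w.r.t. the image parameterization."""
    optimizer = torch.optim.NAdam(image_param.parameters(), lr=lr)
    for _ in range(steps):
        image = image_param()                      # (C, H, W), values in [0, 1]
        _, h, w = image.shape

        # Transformation 1: additive uniform noise in [-0.1, 0.1], one value per pixel.
        image = image + (torch.rand(h, w, device=image.device) * 0.2 - 0.1)

        # Transformation 2: random crop whose relative size is drawn from
        # N(0.25, 0.1) (~25% of the image on average), then resized back.
        frac = torch.normal(mean=0.25, std=0.1, size=(1,)).clamp(0.05, 1.0).item()
        ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
        top = torch.randint(0, h - ch + 1, (1,)).item()
        left = torch.randint(0, w - cw + 1, (1,)).item()
        crop = image[:, top:top + ch, left:left + cw].unsqueeze(0)
        image = F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)

        # Gradient ascent on the target activation (descent on its negative).
        loss = -model(image)[0, neuron_index]
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return image_param().detach()
```

Under these assumptions, `feature_visualization(model, k, MagnitudeConstrainedImage(magnitude))` with a pretrained classifier and a precomputed magnitude spectrum would reproduce the kind of loop the quoted setup describes.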