Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

Authors: Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David Cox, Josh McDermott, James J. DiCarlo, SueYeon Chung

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we use recently developed manifold analysis techniques from computational neuroscience [20] to look beyond accuracy and investigate the internal neural population geometry [21] of standard, adversarially trained, and biologically-inspired stochastic networks in response to clean and adversarially perturbed examples in both visual and auditory domains. We present several key findings: (1) Using manifold analysis, we demonstrate that standard, adversarially trained, and stochastic networks each have distinct geometric signatures in response to clean and adversarially perturbed stimuli, shedding light on varied robustness mechanisms. (2) We demonstrate the generality of our findings by translating the results to a novel biologically-inspired auditory ANN, StochCochResNet50, that includes stochastic responses. Stochasticity makes auditory networks more robust to adversarial perturbations, and the underlying neural population geometry is largely consistent with that in vision networks. (3) Analysis of stochastic networks reveals a protective overlap between the representations of adversarial examples and clean stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. (A toy sketch of this style of manifold-geometry measurement follows this table.)
Researcher Affiliation | Collaboration | 1Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; 2McGovern Institute for Brain Research, Massachusetts Institute of Technology; 3School of Engineering and Applied Sciences, Harvard University; 4Center for Brains, Minds and Machines, Massachusetts Institute of Technology; 5MIT-IBM Watson AI Lab; 6Speech and Hearing Bioscience and Technology, Harvard University; 7Center for Theoretical Neuroscience, Columbia University; 8Zuckerman Institute, Columbia University
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | See https://github.com/chung-neuroai-lab/adversarial-manifolds for accompanying code.
Open Datasets | Yes | We use images sampled from the ImageNet [37] test set. Auditory models are trained to perform the word recognition task in the Word-Speaker-Noise dataset introduced in [9]. We investigate how the level of noise changes the manifold geometry in a smaller model trained on the CIFAR-10 dataset [45] with an architecture similar to ResNet18 [2], with the first conv-relu-maxpool layers replaced by fixed-weight Gabor filters and biologically-inspired activation functions adapted from the Gaussian VOneNet. (A hedged sketch of such a stochastic response layer follows this table.)
Dataset Splits | Yes | We use the scikit-learn SVM implementation with a train/test split of 80/20. For class manifold analysis, the clean stimulus set consists of 50 classes, with each class containing 50 unique exemplar images, for a total of 2,500 unique images. For exemplar manifold analysis, 100 unique images are sampled from the ImageNet test set and each is perturbed with FGSM from a random starting location 50 times, for 5,000 unique images. (A sketch of this construction and of the SVM split follows this table.)
Hardware Specification | No | The paper states: 'All experiments were performed on the MIT BCS OpenMind Computing Cluster.', but it does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions the 'scikit-learn SVM implementation' but does not specify its version number or any other software dependencies with version details.
Experiment Setup | No | The paper states: 'Training and adversarial robustness details are presented in SM 4.1 and SM 4.4 respectively.', deferring specific experimental setup details to the supplementary materials. The main text does not include hyperparameters or detailed training configurations.
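
To picture the manifold analysis referenced in the Research Type row: the paper's actual metrics (manifold capacity, radius, dimension) come from replica mean-field theory [20], which is considerably more involved than what fits here. The sketch below computes only simple proxies, a manifold centroid, an RMS radius, and a participation-ratio dimension, from one layer's activations; the function and its name are illustrative assumptions, not the authors' code.

```python
# Illustrative proxies for manifold geometry (NOT the paper's mean-field
# capacity analysis [20]): centroid, RMS radius, and participation-ratio
# dimension of one class/exemplar manifold in a layer's activation space.
import numpy as np

def manifold_stats(activations):
    """activations: (n_points, n_features) layer responses to all stimuli
    belonging to a single class or exemplar manifold."""
    center = activations.mean(axis=0)
    deltas = activations - center
    # Eigenvalues of the within-manifold covariance.
    eigvals = np.linalg.eigvalsh(np.cov(deltas, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)
    radius = np.sqrt(eigvals.sum())                  # RMS extent around center
    dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()  # participation ratio
    return center, radius, dim
```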
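The Gaussian VOneNet-style stochasticity mentioned in the Open Datasets row can be pictured as a layer that injects noise into activations at both train and test time. This is a minimal sketch assuming simple additive Gaussian noise with a fixed level; the authors' exact noise model, and the CIFAR-10 noise-level sweep, live in their linked repository.

```python
# Hedged sketch of a stochastic response layer, assuming additive Gaussian
# noise with a fixed level; the paper's Gaussian VOneNet noise model may
# differ in detail (see the linked repository).
import torch
import torch.nn as nn

class GaussianStochasticLayer(nn.Module):
    def __init__(self, noise_level: float = 1.0):
        super().__init__()
        self.noise_level = noise_level

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Noise is applied at inference as well as training, since the paper
        # analyzes stochastic responses to clean and adversarial test stimuli.
        return x + self.noise_level * torch.randn_like(x)
```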
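The exemplar-manifold recipe and the SVM analysis in the Dataset Splits row can be sketched as below. The FGSM-with-random-start step and the LinearSVC choice are assumptions consistent with the quoted text ('scikit-learn SVM implementation', 80/20 split), not the authors' released pipeline.

```python
# Sketch of the exemplar-manifold recipe: perturb each clean image with FGSM
# from a random starting point inside the eps-ball, then test linear
# separability with a scikit-learn SVM on an 80/20 split. Names and the
# LinearSVC choice are illustrative assumptions.
import torch
import torch.nn.functional as F
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def fgsm_from_random_start(model, x, y, eps):
    """One FGSM step taken from a random point in the eps-ball around x."""
    x0 = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    x0.requires_grad_(True)
    loss = F.cross_entropy(model(x0), y)
    grad, = torch.autograd.grad(loss, x0)
    x_adv = x0 + eps * grad.sign()
    # Project back into the eps-ball around the clean image; stay in [0, 1].
    return x_adv.clamp(x - eps, x + eps).clamp(0, 1).detach()

def svm_separability(features, labels):
    """Linear separability with the paper's 80/20 train/test split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    return LinearSVC().fit(X_tr, y_tr).score(X_te, y_te)
```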