Representational Dissimilarity Metric Spaces for Stochastic Neural Networks
Authors: Lyndon Duong, Jingyang Zhou, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex H. Williams
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Leveraging this novel framework, we find that the stochastic geometries of neurobiological representations of oriented visual gratings and naturalistic scenes respectively resemble untrained and trained deep network representations. Further, we are able to more accurately predict certain network attributes (e.g. training hyperparameters) from its position in stochastic (versus deterministic) shape space." and "We studied a collection of 1800 trained networks spanning six variants of the VAE framework at six regularization strengths and 50 random seeds (Locatello et al., 2019)." |
| Researcher Affiliation | Collaboration | Lyndon R. Duong, Jingyang Zhou, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex H. Williams; Center for Computational Neuroscience, Flatiron Institute, New York, NY, 10010; Center for Neural Science, New York University, New York, NY, 10003; {lyndon.duong, jingyang.zhou, alex.h.williams}@nyu.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes methods mathematically and textually. |
| Open Source Code | Yes | "We discuss computational complexity in Appendix D.1.1 and provide user-friendly implementations of stochastic shape metrics at: github.com/ahwillia/netrep." (An illustrative sketch of the underlying distance computation appears below the table.) |
| Open Datasets | Yes | "We studied a collection of 1800 trained networks spanning six variants of the VAE framework at six regularization strengths and 50 random seeds (Locatello et al., 2019). Networks were trained on a synthetic image dataset called dSprites (Matthey et al., 2017), which is a well-established benchmark within the VAE disentanglement literature." and "We trained several hundred VAEs on MNIST and CIFAR-10 to confirm these results persisted across more complex datasets (Supp. Fig. 9; Supp. B.3)." and "We leveraged stochastic shape metrics to perform a preliminary study on primary visual cortical recordings (VISp) from K = 31 mice in the Allen Brain Observatory. The results we present below suggest: (a) across-animal variability in covariance geometry is comparable in magnitude to variability in trial-average geometry, (b) across-animal distances in covariance and trial-average geometry are not redundant statistics as they are only weakly correlated, and (c) the relative contributions of mean and covariance geometry to inter-animal shape distances are stimulus-dependent. Together, these results suggest that neural response distributions contain nontrivial geometric structure in their higher-order moments, and that stochastic shape metrics can help dissect this structure. (See Appendix B.2 for full details.) Data are available at: observatory.brain-map.org/visualcoding/" |
| Dataset Splits | Yes | Models used in this study were from checkpoints corresponding to the lowest validation loss during training. |
| Hardware Specification | No | The paper mentions distributing calculations over single CPU cores but does not specify any particular hardware like GPU/CPU models, memory, or cloud instance types used for the experiments. |
| Software Dependencies | No | The paper mentions PyTorch and the OSQP solver but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | "Model training used batch sizes of 64 images, and up to 1000 training epochs. We used the Adam optimizer with 1E-4 learning rate and model checkpoints at each epoch." and "We used stochastic gradient descent with a momentum of 0.9, batch size of 128 and weight decay of 1E-4. Networks were trained for 200 epochs where the learning rate was initially set to 0.1 and halved every 60 epochs." (A hedged configuration sketch follows the table.) |
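
The Open Source Code row points to the authors' netrep package for stochastic shape metrics. As a self-contained illustration of the quantity that package operates on, the sketch below computes the squared 2-Wasserstein (Bures) distance between two Gaussian response distributions from their means and covariances. This is only the per-condition ingredient of the paper's Gaussian stochastic shape metric and omits the alignment over orthogonal transformations; it is not the netrep API, and the function name and array shapes are assumptions for illustration.

```python
# Illustrative sketch (not the netrep API): squared 2-Wasserstein distance
# between two Gaussians N(mu_x, cov_x) and N(mu_y, cov_y), the per-condition
# building block of the Gaussian stochastic shape metrics described in the
# paper. The full metric additionally optimizes over orthogonal alignments.
import numpy as np
from scipy.linalg import sqrtm


def gaussian_w2_squared(mu_x, cov_x, mu_y, cov_y):
    """Squared 2-Wasserstein distance between two Gaussian distributions."""
    # Mean term: squared Euclidean distance between the means.
    mean_term = np.sum((mu_x - mu_y) ** 2)
    # Covariance (Bures) term: tr(Sx + Sy - 2 (Sx^{1/2} Sy Sx^{1/2})^{1/2}).
    sqrt_cov_x = np.real(sqrtm(cov_x))
    cross = np.real(sqrtm(sqrt_cov_x @ cov_y @ sqrt_cov_x))
    bures_term = np.trace(cov_x + cov_y - 2 * cross)
    return mean_term + bures_term


# Toy usage: estimate means/covariances from trial-by-neuron responses of two
# hypothetical networks (or animals) to the same stimulus condition.
rng = np.random.default_rng(0)
responses_x = rng.normal(size=(200, 10))                 # 200 trials, 10 neurons
responses_y = rng.normal(size=(200, 10)) * 1.5 + 0.2
d2 = gaussian_w2_squared(responses_x.mean(0), np.cov(responses_x, rowvar=False),
                         responses_y.mean(0), np.cov(responses_y, rowvar=False))
print(f"squared 2-Wasserstein distance: {d2:.3f}")
```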
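
For the Experiment Setup row, the following is a minimal PyTorch sketch of the two optimization configurations quoted above. It assumes "halved every 60 epochs" corresponds to a StepLR schedule with gamma=0.5, and the model objects are placeholders rather than the paper's VAE or network architectures.

```python
# Hedged sketch of the two training setups quoted in the table; the models
# below are placeholder nn.Modules, not the architectures used in the paper.
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

vae = nn.Linear(64 * 64, 10)             # stand-in for a VAE trained on dSprites
classifier = nn.Linear(3 * 32 * 32, 10)  # stand-in for the second network

# Setup 1 (VAE training): Adam, learning rate 1e-4, batch size 64,
# up to 1000 epochs, with a model checkpoint saved at every epoch.
vae_optimizer = optim.Adam(vae.parameters(), lr=1e-4)
vae_batch_size, max_vae_epochs = 64, 1000

# Setup 2: SGD with momentum 0.9, weight decay 1e-4, batch size 128,
# 200 epochs, learning rate starting at 0.1 and halved every 60 epochs.
sgd_optimizer = optim.SGD(classifier.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=1e-4)
scheduler = StepLR(sgd_optimizer, step_size=60, gamma=0.5)

for epoch in range(200):
    # ... forward/backward passes over batches of 128 images go here ...
    sgd_optimizer.step()   # placeholder; a real loop steps once per batch
    scheduler.step()       # halves the learning rate every 60 epochs
```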