Adaptive stimulus selection for optimizing neural population responses
Authors: Benjamin Cowley, Ryan Williamson, Katerina Clemens, Matthew Smith, Byron M. Yu
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In simulations, we first confirmed that population objective functions elicited more diverse stimulus responses than single-neuron objective functions. Then, we tested Adept in a closed-loop electrophysiological experiment in which population activity was recorded from macaque V4, a cortical area known for mid-level visual processing. Natural images chosen by Adept elicited mean neural responses that were 20% larger than those for randomly-chosen natural images, and also evoked a larger diversity of neural responses. |
| Researcher Affiliation | Academia | 1Machine Learning Dept., 2Center for Neural Basis of Cognition, 3Dept. of Electrical and Computer Engineering, 4Dept. of Biomedical Engineering, Carnegie Mellon University 5School of Medicine, 6Dept. of Neuroscience, 7Dept. of Ophthalmology, University of Pittsburgh |
| Pseudocode | Yes | Algorithm 1: Adept algorithm |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the Adept methodology is open-source or publicly available. |
| Open Datasets | Yes | We used the same candidate image pool of N ≈ 10,000 natural images from the McGill natural image dataset [21] and Google image search [22]. [21] A. Olmos and F. A. Kingdom, "A biologically inspired algorithm for the recovery of shading and reflectance images," Perception, vol. 33, no. 12, pp. 1463–1473, 2004. [22] Google Image Search. http://images.google.com. Accessed: 2017-04-25. |
| Dataset Splits | No | The paper mentions using 'a held out set of responses not used for training' in the context of comparing prediction approaches, but it does not specify explicit training/validation/test splits (e.g., percentages or counts) for the neural recording data collected in their experiments. |
| Hardware Specification | Yes | We tested the performance of this approach versus three other possible prediction approaches. The first two approaches use linear ridge regression and kernel regression, respectively, to predict r_s. Their prediction r̂_s is then used to evaluate the objective in place of r_s. The third approach is a linear ridge regression version of Eqn. 2 to directly predict ‖r_s‖ and ‖r_s − r_nj‖. To compare the performance of these approaches, we developed a testbed in which we sampled two distinct populations of neurons from the same CNN, and asked how well one population can predict the responses of the other population using the different approaches described above. Formally, we let x_1, …, x_N be feature embedding vectors of q = 500 CNN neurons, and response vectors r_n1, …, r_n800 be the responses of p = 200 different CNN neurons to 800 natural images. CNN neurons were from the same GoogLeNet CNN [18] (see CNN details in Results). To compute performance, we took the Pearson correlation ρ between the predicted and actual objective values on a held-out set of responses not used for training. The approach in Eqn. 2 performed the best (ρ = 0.64) and was the fastest (τ = 0.2 s) compared to the other prediction approaches (ρ = 0.39, 0.41, 0.23 and τ = 12.9 s, 1.5 s, 48.4 s, for the three other approaches, respectively). The remarkably faster speed of Eqn. 2 over other approaches comes from the evaluation of the objective function (fast matrix operations), the fact that no training of linear regression weight vectors is needed, and the fact that distances are directly predicted (unlike the approaches that first predict r̂_s and then must re-compute distances between r̂_s and r_n1, …, r_n(t−1) for each candidate stimulus s). Due to its performance and fast computation time, we use the prediction approach in Eqn. 2 for the remainder of this work. We also tracked the computation time τ (computed on an Intel Xeon 2.3 GHz CPU with 36 GB RAM) because these computations need to occur between stimulus presentations in an electrophysiological experiment. |
| Software Dependencies | No | The paper mentions 'Pre-trained CNNs were downloaded from MatConvNet [25], with the PVT version of GoogLeNet [26]', but it does not specify version numbers for MatConvNet or the PVT version used. |
| Experiment Setup | Yes | In both settings, we used the same candidate image pool of N ≈ 10,000 natural images from the McGill natural image dataset [21] and Google image search [22]. For the predictive feature embeddings in both settings, we used responses from a pre-trained CNN different from the CNN used as a surrogate for the brain in the first setting... For this CNN, we took responses of p = 200 neurons in a middle layer of the pre-trained ResNet CNN [24] (layer 25 of 50, named 'res3dx'). A second CNN is used for feature embeddings to predict responses of the first CNN. For this CNN, we took responses of q = 750 neurons in a middle layer of the pre-trained GoogLeNet CNN [18] (layer 5 of 10, named 'icp4_out')... We ran Adept for 2,000 out of the 10,000 candidate images (with Ninit = 5 and kernel bandwidth h = 200; similar results were obtained for different h)... We implanted a 96-electrode array in macaque V4... On each trial, a monkey fixated on a central dot while an image flashed four times... The spike counts for each neural unit were averaged across the four 100 ms flashes... For the predictive feature embeddings, we used q = 500 CNN neurons in the fifth layer of GoogLeNet CNN (kernel bandwidth h = 200). In each recording session, the monkey typically performed 2,000 trials (i.e., 2,000 of the N = 10,000 natural images would be sampled). Each Adept run started with Ninit = 5 randomly-chosen images. |
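The Hardware Specification row quotes the paper's Eqn. 2 approach: instead of first predicting the response vector r̂_s and then computing distances, it predicts ‖r_s‖ and the distances ‖r_s − r_nj‖ directly via kernel regression on feature embeddings, which avoids fitting any regression weights. A minimal NumPy sketch of that idea is below; the Gaussian kernel form, the `1e-12` numerical guard, and all function names are assumptions for illustration, not the paper's exact Eqn. 2.

```python
import numpy as np

def rbf_kernel(x, X, h):
    """Gaussian kernel weights between one embedding x and the rows of X (bandwidth h)."""
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * h ** 2))

def predict_norm_and_dists(x_s, X_shown, R_shown, h=200.0):
    """Kernel-regression sketch in the spirit of the paper's Eqn. 2.

    Directly predicts the response norm ||r_s|| and the distances
    ||r_s - r_nj|| to previously recorded responses, using only the
    feature embedding x_s of the candidate stimulus -- no regression
    weights are trained, which is why this approach is fast.

    x_s     : (q,)  embedding of the candidate stimulus
    X_shown : (t, q) embeddings of already-shown stimuli
    R_shown : (t, p) recorded population responses to those stimuli
    """
    w = rbf_kernel(x_s, X_shown, h)
    w = w / (w.sum() + 1e-12)  # normalized kernel weights (Nadaraya-Watson style)
    # Predicted norm of the unseen response: kernel average of recorded norms.
    norm_hat = w @ np.linalg.norm(R_shown, axis=1)
    # Predicted distance to each recorded response r_nj:
    # kernel average of the pairwise response distances.
    D = np.linalg.norm(R_shown[:, None, :] - R_shown[None, :, :], axis=2)  # (t, t)
    dist_hat = w @ D
    return norm_hat, dist_hat
```

With a very small bandwidth the prediction collapses onto the nearest shown stimulus, which is a quick sanity check that the kernel weighting behaves as intended.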
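The Pseudocode and Experiment Setup rows describe the closed loop: start from Ninit randomly-chosen images, then repeatedly show the candidate whose predicted objective (large mean response plus response diversity) is highest. A greedy sketch of that loop is below, assuming predictions come from a caller-supplied `predict_fn`; the additive `norm + alpha * mean distance` objective and the `alpha` trade-off weight are illustrative assumptions, not the paper's exact population objective.

```python
import numpy as np

def adept_select(X_pool, predict_fn, n_select=10, n_init=5, alpha=1.0, seed=None):
    """Greedy Adept-style stimulus selection loop (sketch).

    X_pool     : (N, q) feature embeddings of the candidate image pool
    predict_fn : callable (x_s, shown_idx) -> (norm_hat, dist_hat), e.g. a
                 kernel-regression predictor over already-shown stimuli
    Returns the indices of the n_select chosen stimuli, in presentation order.
    """
    rng = np.random.default_rng(seed)
    N = X_pool.shape[0]
    # Seed the loop with n_init randomly-chosen stimuli, as in the experiments.
    shown = list(rng.choice(N, size=n_init, replace=False))
    remaining = set(range(N)) - set(shown)
    while len(shown) < n_select:
        best, best_obj = None, -np.inf
        for s in remaining:
            norm_hat, dist_hat = predict_fn(X_pool[s], shown)
            # Illustrative objective: predicted response magnitude plus
            # predicted diversity relative to responses already recorded.
            obj = norm_hat + alpha * dist_hat.mean()
            if obj > best_obj:
                best, best_obj = s, obj
        shown.append(best)
        remaining.remove(best)
    return shown
```

In the actual experiment each selection must complete between stimulus flashes, which is why the paper favors the weight-free prediction of Eqn. 2 over approaches that retrain regressors at every step.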