Deep Learning Models of the Retinal Response to Natural Scenes
Authors: Lane McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, Stephen Baccus
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We find that deep neural network models markedly outperform previous models in predicting retinal responses both for white noise and natural scenes. |
| Researcher Affiliation | Academia | Lane T. McIntosh1, Niru Maheswaranathan1, Aran Nayebi1, Surya Ganguli2,3, Stephen A. Baccus3; 1Neurosciences PhD Program, 2Department of Applied Physics, 3Neurobiology Department, Stanford University. {lmcintosh, nirum, anayebi, sganguli, baccus}@stanford.edu |
| Pseudocode | No | The paper contains architectural diagrams (Figures 1 and 2) but no pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The spiking activity of a population of tiger salamander retinal ganglion cells was recorded in response to both sequences of natural images jittered with the statistics of eye movements and high resolution spatiotemporal white noise. The paper describes these recordings but provides no statement or link indicating that the dataset is publicly available. |
| Dataset Splits | Yes | More details on the stimuli, retinal recordings, experimental structure, and division of data for training, validation, and testing are given in the Supplemental Material. |
| Hardware Specification | Yes | The acknowledgments note "LM: NSF, NVIDIA Titan X Award," indicating that an NVIDIA Titan X GPU was used, though no fuller hardware specification is given. |
| Software Dependencies | No | Optimization was performed using ADAM [20] via the Keras and Theano software libraries [21]. This lists software libraries but does not provide specific version numbers. |
| Experiment Setup | Yes | Model parameters were optimized to minimize a loss function corresponding to the negative log-likelihood under Poisson spike generation. Optimization was performed using ADAM [20] via the Keras and Theano software libraries [21]. The networks were regularized with an ℓ2 weight penalty at each layer and an ℓ1 activity penalty at the final layer, which helped maintain a baseline firing rate near 0 Hz. Models were trained over the course of 100 epochs, with early-stopping guided by a validation set. (A hedged code sketch of this setup follows the table.) |
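
The Experiment Setup row describes a standard supervised training loop: a CNN trained with a Poisson negative log-likelihood loss, ADAM, ℓ2 weight penalties at every layer, an ℓ1 activity penalty at the output, and early stopping over 100 epochs. Below is a minimal sketch of that setup, assuming the modern `tf.keras` API rather than the Theano-backed Keras the paper used; the architecture, layer sizes, input shape, cell count, and penalty strengths are illustrative placeholders, not values taken from the paper. Keras' built-in `Poisson` loss equals the negative Poisson log-likelihood up to an additive constant.

```python
# A hedged sketch of the training setup described above (tf.keras, not the
# paper's Theano-backed Keras). All sizes and penalty weights are
# illustrative placeholders, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

N_CELLS = 5                  # hypothetical number of recorded ganglion cells
INPUT_SHAPE = (40, 50, 50)   # hypothetical (time, height, width) stimulus clip

model = tf.keras.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    # Treat the temporal dimension as input channels for a spatiotemporal CNN.
    layers.Permute((2, 3, 1)),
    layers.Conv2D(8, 15, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-3)),
    layers.Conv2D(16, 9, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-3)),
    layers.Flatten(),
    # Final layer: l2 weight penalty plus the l1 activity penalty the paper
    # says helps keep baseline firing rates near 0 Hz. Softplus keeps the
    # predicted rates non-negative, as the Poisson likelihood requires.
    layers.Dense(N_CELLS, activation="softplus",
                 kernel_regularizer=regularizers.l2(1e-3),
                 activity_regularizer=regularizers.l1(1e-3)),
])

# Keras' Poisson loss is mean(y_pred - y_true * log(y_pred)), i.e. the
# negative Poisson log-likelihood up to an additive constant.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.Poisson())

# 100 epochs with early stopping guided by a validation set, as in the paper.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```

The `patience` value and the choice of softplus output are assumptions for the sketch; the paper specifies early stopping on a validation set but the quoted passage does not give the stopping criterion.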