Efficient characterization of electrically evoked responses for neural interfaces
Authors: Nishal Shah, Sasidhar Madugula, Pawel Hottowy, Alexander Sher, Alan Litke, Liam Paninski, E.J. Chichilnisky
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This work tests the idea that using prior information from previous experiments and closed-loop measurements may greatly increase the efficiency of the neural interface. Large-scale, high-density electrical recording and stimulation in primate retina were used as a lab prototype for an artificial retina. |
| Researcher Affiliation | Academia | Nishal P. Shah (Stanford University); Sasidhar Madugula (Stanford University); Pawel Hottowy (AGH University of Science and Technology); Alexander Sher (University of California, Santa Cruz); Alan Litke (University of California, Santa Cruz); Liam Paninski (Columbia University); E.J. Chichilnisky (Stanford University) |
| Pseudocode | No | The paper describes algorithms using mathematical equations and textual descriptions, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/Chichilnisky-Lab/shah-neurips-2019 |
| Open Datasets | No | The paper describes using 'Extracellular recording and stimulation of primate retinal ganglion cells ex vivo' and 'simulated data' but does not provide concrete access information (link, DOI, specific repository, or formal citation with author and year) for a publicly available or open dataset. |
| Dataset Splits | No | The paper mentions 'cross-validation' for parameter selection, but it does not provide specific details on dataset splits such as percentages, sample counts, or references to predefined splits for training, validation, and testing. |
| Hardware Specification | No | The paper describes the '512-electrode technology' for recording and stimulation, but it does not provide specific details about the computing hardware (e.g., GPU models, CPU types, or server specifications) used for running the experiments or analysis. |
| Software Dependencies | No | The paper mentions 'Adam optimization' and 'custom software' but does not list specific software components with their version numbers. |
| Experiment Setup | Yes | Optimization is performed using Adam (learning rate = 0.01); λL1 is chosen using cross-validation, and the temperature τ is reduced to 0.8 times its previous value each time the loss converges. |
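The experiment-setup row above can be illustrated with a minimal sketch of the reported optimization schedule: Adam with learning rate 0.01 and a temperature τ multiplied by 0.8 each time the inner loss converges. The toy objective below (a least-squares fit with a temperature-smoothed L1 penalty, `sqrt(x² + τ²) − τ`) is an assumption chosen only to make the annealing loop concrete; it is not the paper's actual model, and `lam` stands in for the cross-validated λL1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative only): y = A @ x_true + noise, with a
# sparse x_true to be recovered under a smoothed L1 penalty.
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.normal(size=50)

lam = 0.1                  # stand-in for the cross-validated lambda_L1
lr = 0.01                  # Adam learning rate, as reported in the paper
b1, b2, eps = 0.9, 0.999, 1e-8

x = np.zeros(20)
m = np.zeros_like(x)       # Adam first-moment estimate
v = np.zeros_like(x)       # Adam second-moment estimate
tau = 1.0                  # temperature of the smoothed L1 penalty
prev_loss = np.inf
t = 0

for _ in range(40):                      # annealing rounds: tau -> 0.8 * tau
    for _ in range(2000):                # inner optimization at fixed tau
        t += 1
        resid = A @ x - y
        loss = 0.5 * resid @ resid + lam * np.sum(np.sqrt(x**2 + tau**2) - tau)
        grad = A.T @ resid + lam * x / np.sqrt(x**2 + tau**2)

        # Adam update with bias-corrected moment estimates
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad**2
        x -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

        # Declare convergence when the loss stops changing
        if abs(prev_loss - loss) < 1e-6:
            prev_loss = loss
            break
        prev_loss = loss

    tau *= 0.8                           # cool the temperature and continue
```

The iteration cap on the inner loop is a safeguard added here; the paper only states that τ is reduced "every time the loss converges," without specifying the convergence test.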