Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses
Authors: Jacob Granley, Tristan Fauvel, Matthew Chalk, Michael Beyeler
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the viability of this approach on a novel, state-of-the-art visual prosthesis model. We show that our approach quickly learns a personalized stimulus encoder, leads to dramatic improvements in the quality of restored vision, and is robust to noisy patient feedback and misspecifications in the underlying forward model. Overall, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies. |
| Researcher Affiliation | Collaboration | Jacob Granley, Department of Computer Science, University of California, Santa Barbara (jgranley@ucsb.edu); Tristan Fauvel, Institut de la Vision, Sorbonne Université, 17 rue Moreau, F-75012 Paris, France, now with Quinten Health (t.fauvel@quinten-health.com); Matthew Chalk, Institut de la Vision, Sorbonne Université, 17 rue Moreau, F-75012 Paris, France (matthew.chalk@inserm.fr); Michael Beyeler, Department of Computer Science and Department of Psychological & Brain Sciences, University of California, Santa Barbara (mbeyeler@ucsb.edu) |
| Pseudocode | No | The paper includes architectural diagrams (Figure B.1) and describes processes in text, but it does not feature a formal 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code for the forward model, DSE, and HILO algorithm is available at https://github.com/bionicvisionlab/2023-NeurIPS-HILO. |
| Open Datasets | Yes | We used MNIST images as target visual percepts throughout the experiments. |
| Dataset Splits | Yes | After every duel, we evaluated the DSE parameterized by the current prediction of patient-specific parameters on a subset of the MNIST test set. [...] For each of these candidate kernel-hyperparameter pairs, we fit a GP with the corresponding kernel and hyperparameters to 50 training duels for each of the other 9 patients. Then, the performance of the candidate GP was evaluated on the remaining 550 data points using Brier score [43] on a held-out test set. (A hedged sketch of this kernel-selection procedure follows the table.) |
| Hardware Specification | Yes | Tensorflow 2.12, an NVIDIA RTX 3090, Adam optimizer, and batch size of 256 [38, 39] were used to train the network. |
| Software Dependencies | Yes | Tensorflow 2.12, an NVIDIA RTX 3090, Adam optimizer, and batch size of 256 [38, 39] were used to train the network. |
| Experiment Setup | Yes | Tensorflow 2.12, an NVIDIA RTX 3090, Adam optimizer, and batch size of 256 [38, 39] were used to train the network. [...] During training, ϕ were randomly sampled from the range of allowed parameters (Table 1). [...] We set σ to be 0.01, chosen empirically based on a conservative estimate: when the error difference was greater than 0.01 it was obvious which percept was better to human observers. (Hedged sketches of this training recipe and of the simulated noisy feedback follow the table.) |
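The Dataset Splits row describes a leave-one-patient-out kernel search scored with the Brier score. The sketch below illustrates that procedure under stated assumptions: scikit-learn's `GaussianProcessClassifier` is a stand-in for the paper's preference GP, and the `duels` data structure, candidate kernels, and 50-duel training split are illustrative, not the authors' code.

```python
# Hedged sketch: kernel selection by Brier score, leave-one-patient-out.
# `duels[p]` = (X, y) for patient p, where each row of X encodes one duel
# and y is the binary preference label. All names/shapes are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.metrics import brier_score_loss

candidate_kernels = [RBF(length_scale=ls) for ls in (0.1, 1.0)] + \
                    [Matern(length_scale=1.0, nu=nu) for nu in (1.5, 2.5)]

def brier_for_kernel(kernel, duels, held_out, n_train=50):
    """Fit a GP on the first n_train duels of every patient except
    `held_out`, then average Brier scores on the remaining duels."""
    scores = []
    for patient, (X, y) in duels.items():
        if patient == held_out:
            continue
        gp = GaussianProcessClassifier(kernel=kernel).fit(X[:n_train], y[:n_train])
        p_win = gp.predict_proba(X[n_train:])[:, 1]  # P(first stimulus preferred)
        scores.append(brier_score_loss(y[n_train:], p_win))
    return float(np.mean(scores))

# Usage: pick the kernel with the lowest mean Brier score across patients.
# best = min(candidate_kernels,
#            key=lambda k: np.mean([brier_for_kernel(k, duels, p) for p in duels]))
```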
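The Hardware Specification, Software Dependencies, and Experiment Setup rows all quote the same training recipe: TensorFlow 2.12, Adam, batch size 256, with patient-specific parameters ϕ sampled uniformly from their allowed ranges during training, and MNIST digits as targets per the Open Datasets row. The sketch below assembles those pieces into a minimal training step; the network architecture, the ϕ ranges, and `toy_forward_model` (a stand-in for the differentiable phosphene model) are assumptions, not the published implementation.

```python
# Hedged sketch of the quoted training recipe (TensorFlow, Adam, batch 256,
# phi sampled uniformly per batch). Architecture and ranges are illustrative.
import tensorflow as tf

BATCH_SIZE = 256
PHI_RANGES = [(50.0, 500.0), (0.1, 10.0)]  # hypothetical allowed ranges for phi

def sample_phi(n):
    """Uniformly sample patient-specific parameters within allowed ranges."""
    return tf.concat([tf.random.uniform([n, 1], lo, hi)
                      for lo, hi in PHI_RANGES], axis=1)

def toy_forward_model(stim, phi):
    # Stand-in for the differentiable phosphene model (stimulus -> percept).
    return tf.nn.sigmoid(stim)

# Stand-in DSE: maps (flattened target image, phi) -> a stimulus vector.
dse = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(784),
])
optimizer = tf.keras.optimizers.Adam()  # default learning rate; paper may differ

@tf.function
def train_step(targets, forward_model):
    phi = sample_phi(tf.shape(targets)[0])
    with tf.GradientTape() as tape:
        stim = dse(tf.concat([targets, phi], axis=1))
        percept = forward_model(stim, phi)
        loss = tf.reduce_mean(tf.square(percept - targets))  # reconstruction MSE
    grads = tape.gradient(loss, dse.trainable_variables)
    optimizer.apply_gradients(zip(grads, dse.trainable_variables))
    return loss

# MNIST digits as target percepts, matching the Open Datasets row.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
ds = tf.data.Dataset.from_tensor_slices(x_train).shuffle(10_000).batch(BATCH_SIZE)
for batch in ds.take(1):
    print(train_step(batch, toy_forward_model).numpy())
```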
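The Experiment Setup row also quotes the choice of σ = 0.01 for simulating noisy patient feedback. The exact noise model is not quoted here, so the sketch below assumes a simple Gaussian-perturbed comparison of percept errors; treat it as one plausible reading, not the paper's definition.

```python
# Hedged sketch: simulated noisy duel feedback with sigma = 0.01. The
# Gaussian-perturbed error comparison is an assumed noise model.
import numpy as np

SIGMA = 0.01  # per the quote: error gaps above 0.01 were obvious to observers

def duel(err_a, err_b, rng=np.random.default_rng(0)):
    """Return 1 if percept A is (noisily) judged better than percept B."""
    return int(err_a + rng.normal(0.0, SIGMA) < err_b + rng.normal(0.0, SIGMA))

# A large error gap is almost always decided correctly; a tiny gap is near chance.
print(duel(0.10, 0.20), duel(0.100, 0.101))
```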