Adaptation Properties Allow Identification of Optimized Neural Codes

Authors: Luke Rast, Jan Drugowitsch

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Here we solve an inverse problem: characterizing the objective and constraint functions that efficient codes appear to be optimal for, on the basis of how they adapt to different stimulus distributions. We formulate a general efficient coding problem, with flexible objective and constraint functions and minimal parametric assumptions. Solving special cases of this model, we provide solutions to broad classes of Fisher information-based efficient coding problems, generalizing a wide range of previous results. We show that different objective function types impose qualitatively different adaptation behaviors, while constraints enforce characteristic deviations from classic efficient coding signatures. Despite interaction between these effects, clear signatures emerge for both unconstrained optimization problems and information-maximizing objective functions. By asking for a fixed point of the neural code under adaptation, we find an objective-independent characterization of constraints on the neural code. We use this result to propose an experimental paradigm that can characterize both the objective and constraint functions that an observed code appears to be optimized for. (One standard example of a classic efficient coding signature is written out after this table.)
Researcher Affiliation | Academia | Luke Rast, Harvard University, lukerast@g.harvard.edu; Jan Drugowitsch, Department of Neurobiology, Harvard Medical School, jan_drugowitsch@hms.harvard.edu
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any links to open-source code for the methodology described.
Open Datasets | No | The paper uses 'simulated activity data' for an illustrative example in Figure 3, but this data is generated internally based on specified parameters, not sourced from a publicly available dataset with concrete access information.
Dataset Splits | No | The paper mentions 'fitting' and 'training' in the context of estimating neural activity parameters from simulated data, but it does not specify any training, validation, or test dataset splits for these processes.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run its simulations or analyses.
Software Dependencies | No | The paper mentions statistical methods like 'GLM regression' and 'neural network' modeling but does not list any specific software, libraries, or their version numbers.
Experiment Setup | No | The paper describes the parameters of the simulated data and the model used for illustration in Figure 3 (uniform stimulus distribution, Poisson activity, θ ∈ [5, 55], f(x) = x, C(θ) = sin(2πθ/50) + 1, and M = 0.75, with 2000 trials each). However, it does not specify hyperparameters or system-level training settings for the GLM regression or neural network fitting process itself (e.g., learning rate, optimizer details, number of epochs). (A hedged simulation sketch based on these parameters follows this table.)
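The Experiment Setup row lists simulation parameters but not a complete generative recipe. The Python sketch below shows one hypothetical way to simulate Poisson activity over a uniform stimulus range and fit it with a Poisson GLM, in the spirit of the 'GLM regression' the paper mentions. Reusing C(θ) as the mean firing rate, the sine/cosine design matrix, and scikit-learn's PoissonRegressor are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Stimuli: uniform over [5, 55], 2000 trials (values taken from the row above).
n_trials = 2000
theta = rng.uniform(5.0, 55.0, size=n_trials)

# Hypothetical encoding model: the paper gives C(theta) = sin(2*pi*theta/50) + 1
# as a constraint function; here it is simply reused as the Poisson mean rate so
# that the simulation is concrete. This choice is an assumption for illustration.
rate = np.sin(2.0 * np.pi * theta / 50.0) + 1.0
counts = rng.poisson(rate)

# Poisson GLM (log link) on sine/cosine features of theta, analogous to the
# 'GLM regression' step the paper refers to when fitting activity parameters.
X = np.column_stack([np.sin(2.0 * np.pi * theta / 50.0),
                     np.cos(2.0 * np.pi * theta / 50.0)])
glm = PoissonRegressor(alpha=0.0).fit(X, counts)
print("fitted coefficients:", glm.coef_, "intercept:", glm.intercept_)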
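The Research Type row refers to Fisher information-based efficient coding and its 'classic signatures'. As background only, a standard result in this literature for information-maximizing codes with a bounded Fisher-information resource (not a statement taken from this paper) can be written in LaTeX as:

% Fisher-approximated infomax objective with a resource bound on \int \sqrt{J}:
\max_{J(\theta) \ge 0} \int p(\theta)\,\log\sqrt{J(\theta)}\,\mathrm{d}\theta
\quad \text{subject to} \quad \int \sqrt{J(\theta)}\,\mathrm{d}\theta \le R
% Lagrangian stationarity gives the classic allocation rule:
\qquad \Longrightarrow \qquad \sqrt{J^{*}(\theta)} \propto p(\theta)

That is, the square root of the optimal Fisher information tracks the stimulus distribution; deviations from this allocation are the kind of constraint-induced signature the paper's abstract describes.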