Phase transitions in when feedback is useful
Authors: Lokesh Boominathan, Xaq Pitkow
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Here we offer a more general theory of inference that accounts for the costs and reliabilities of the feedback and feedforward channels, and the relative importance of good inferences about the latent world state. We formulate the inference problem as control via message-passing on a graph, maximizing how well an inference tracks a target state while minimizing the message costs. Our theory enables us to determine the optimal predictions and how they are integrated into computationally constrained inference. This analysis reveals phase transitions in when feedback is helpful, as we change the computation parameters or the world dynamics. (An illustrative toy sketch of this cost-versus-accuracy trade-off appears below the table.) |
| Researcher Affiliation | Academia | Lokesh Boominathan, Department of ECE, Rice University, Houston, TX 77005 (lokesh.boominathan@rice.edu); Xaq Pitkow, Dept. of Neuroscience, Dept. of ECE, Baylor College of Medicine, Rice University, Houston, TX 77005 (xaq@rice.edu) |
| Pseudocode | No | The paper describes its mathematical framework and solution approach but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper is theoretical and defines a synthetic problem setup; it does not use or provide access to any public datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments with dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes its theoretical model and optimization but does not provide specific details on an experimental setup, such as hyperparameters or training configurations for a computational experiment. |
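The paper's actual formulation is control via message-passing on a graph with costly feedforward and feedback channels; the snippet below is only a loose toy sketch in the same spirit, not the authors' model. It contrasts a scalar steady-state Kalman filter (whose prior is sharpened by a fed-back prediction) with a memoryless estimator (no feedback), for a hypothetical AR(1) world x_{t+1} = a x_t + w_t observed as y_t = x_t + v_t. The parameter values, function names, and the per-step message cost are all illustrative assumptions.

```python
def steady_state_kalman_var(a, q, r, iters=500):
    """Posterior variance of the steady-state Kalman filter for
    x_{t+1} = a*x_t + w_t (Var w = q), observed as y_t = x_t + v_t (Var v = r)."""
    P = q  # predicted (prior) variance, iterated to its fixed point
    for _ in range(iters):
        S = P * r / (P + r)   # posterior variance after seeing y_t
        P = a**2 * S + q      # predicted variance for the next step
    return P * r / (P + r)

def memoryless_var(a, q, r):
    """Posterior variance when each estimate uses only the current observation,
    i.e. no prediction is fed back to sharpen the prior."""
    sigma2 = q / (1.0 - a**2)          # stationary variance of the latent state
    return sigma2 * r / (sigma2 + r)

# Hypothetical world/channel parameters for the toy model.
a, q, r = 0.95, 0.1, 1.0

err_with_feedback = steady_state_kalman_var(a, q, r)
err_without_feedback = memoryless_var(a, q, r)

# If sending the prediction (the "feedback message") costs c per step, it is
# only worth sending when the error reduction exceeds c; the point where the
# optimal strategy switches plays the role of a phase boundary.
critical_cost = err_without_feedback - err_with_feedback
print(f"error with feedback:    {err_with_feedback:.3f}")
print(f"error without feedback: {err_without_feedback:.3f}")
print(f"feedback is worthwhile only for message cost below ~{critical_cost:.3f}")

# Sweeping the world's temporal correlation shows how the boundary moves:
# slowly varying worlds (a near 1) make fed-back predictions more valuable.
for a_val in (0.5, 0.8, 0.95, 0.99):
    gap = memoryless_var(a_val, q, r) - steady_state_kalman_var(a_val, q, r)
    print(f"a = {a_val:.2f}: feedback pays off for cost < {gap:.3f}")
```

Under these assumptions the optimal strategy switches abruptly at a critical message cost, and that boundary shifts with the world's correlation time, loosely echoing the paper's claim that feedback's usefulness changes with the computation parameters and the world dynamics.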