Complex priors and flexible inference in recurrent circuits with dendritic nonlinearities

Authors: Benjamin S. H. Lyo, Cristina Savin

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In simulations, we demonstrate several scenarios of prior and posterior encoding, including nonlinear manifolds embedded in a higher dimensional ambient space as priors and several likelihoods corresponding to bottom-up and top-down evidence. We numerically tested the quality of the samples generated by our neural circuit in a toy example of a two-dimensional nonlinear manifold (shaped as a swiss-roll, see Fig. 1D inset) with linear dimensionality 3, embedded in an ambient feature space with dimensionality N = 10. While the quality of samples is harder to estimate, we also find good quality representations of a high dimensional prior trained on the MNIST dataset (Deng, 2012) (see Suppl. B.6). (A toy construction of this swiss-roll prior is sketched below the table.)
Researcher Affiliation | Academia | Benjamin S. H. Lyo, Center for Neural Science, New York University (blyo@nyu.edu); Cristina Savin, Center for Neural Science and Center for Data Science, New York University (csavin@nyu.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | A software implementation of the model is available at https://github.com/Savin-Lab-Code/LyoSavin2023.
Open Datasets | Yes | While the quality of samples is harder to estimate, we also find good quality representations of a high dimensional prior trained on the MNIST dataset (Deng, 2012) (see Suppl. B.6). (A standard MNIST loading snippet is sketched below the table.)
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits, percentages, or explicit sample counts for reproduction. It mentions evaluating similarity using KL divergence, but not in the context of formal dataset splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions tools like the Adam optimizer and the torch autograd package, but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | The paper reports optimizer settings (Adam with a learning rate of 3e-4 over 1.5e6 epochs, 1e-4 over 5000 epochs, and 4e-3 for 5000 epochs), the dendritic tree geometry (simulations use a depth of 7 and a branching factor of 3, except in the most proximal section, which has a branching factor of 4), and the scalar γ, a hyperparameter that weighs the relative contribution of the log prior and log likelihood, which is set to 1. (The reported optimizer and tree settings are sketched below the table.)
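
The toy prior quoted in the Research Type row (a two-dimensional swiss-roll manifold with linear dimensionality 3, embedded in an N = 10 ambient feature space) can be reproduced schematically. The sketch below is not the authors' code: the sample count, the noise-free parametrization, and the random orthogonal embedding are assumptions made purely for illustration.

```python
import numpy as np

# Minimal sketch (not the authors' code): a 2-D swiss-roll manifold with
# linear dimensionality 3, embedded in an N = 10 dimensional ambient space.
rng = np.random.default_rng(0)

n_samples = 2000                                      # assumed sample count
t = 1.5 * np.pi * (1 + 2 * rng.random(n_samples))     # spiral coordinate
h = 10.0 * rng.random(n_samples)                      # height coordinate

# 3-D swiss roll: intrinsically 2-D in (t, h), linearly 3-D.
roll_3d = np.stack([t * np.cos(t), h, t * np.sin(t)], axis=1)

# Embed in a 10-D ambient feature space via a random orthogonal map
# (assumed; the paper does not specify the embedding it used).
N = 10
A = np.linalg.qr(rng.standard_normal((N, 3)))[0]      # N x 3, orthonormal columns
ambient = roll_3d @ A.T                               # shape (n_samples, N)

print(ambient.shape)  # (2000, 10)
```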
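
The MNIST prior noted in the Open Datasets row refers to the standard MNIST benchmark (Deng, 2012). A minimal loading sketch, assuming the torchvision loader (the paper does not state which loader was used):

```python
import torchvision

# Minimal sketch (assumption: torchvision; the paper only cites Deng, 2012).
mnist = torchvision.datasets.MNIST(
    root="./data", train=True, download=True,
    transform=torchvision.transforms.ToTensor(),
)
print(len(mnist))  # 60000 training images
```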
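
The Experiment Setup row quotes optimizer and morphology settings. The sketch below illustrates one of the reported Adam configurations and the size of a dendritic tree with the reported depth and branching factors; the placeholder module and the level-indexing convention (only the most proximal level branching into 4) are assumptions, not the authors' implementation.

```python
import torch

# Minimal sketch (assumptions, not the authors' training code).

# One of the reported settings: Adam with learning rate 3e-4.
model = torch.nn.Linear(10, 10)   # placeholder module, assumed for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

# Reported tree geometry: depth 7, branching factor 3, except the most
# proximal section, which has a branching factor of 4.
depth, proximal_branches, branches = 7, 4, 3
units_per_level = [1]                                  # level 0: soma
for level in range(1, depth + 1):
    factor = proximal_branches if level == 1 else branches
    units_per_level.append(units_per_level[-1] * factor)
print(units_per_level)  # [1, 4, 12, 36, 108, 324, 972, 2916]
```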