Fast Sampling-Based Inference in Balanced Neuronal Networks

Authors: Guillaume Hennequin, Laurence Aitchison, Máté Lengyel

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We first show analytically and through simulations that the symmetry of the synaptic weight matrix implied by LS yields critically slow mixing when the posterior is high-dimensional. Next, using methods from control theory, we construct and inspect networks that are optimally fast, and hence orders of magnitude faster than LS, while being far more biologically plausible." (see the sampling sketch after this table)
Researcher Affiliation | Academia | "¹ Computational & Biological Learning Lab, Dept. of Engineering, University of Cambridge, UK; ² Gatsby Computational Neuroscience Unit, University College London, UK"
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Our code will be made freely available from GH's personal webpage."
Open Datasets | No | The paper uses a toy covariance matrix Σ drawn from an inverse-Wishart distribution, whose parameters are specified (N = 200, σ_0^2 = 2, σ_r = 0.2), but this is a dataset generated within the paper, not an externally accessible public dataset with a link or formal citation. (A hedged generation sketch follows the table.)
Dataset Splits | No | The paper does not explicitly describe training, validation, and test dataset splits for model training or evaluation, as its focus is on analyzing the dynamics of a sampling system rather than training a model on a fixed dataset.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments.
Software Dependencies | No | The paper does not provide specific software dependencies, libraries, or solver names with version numbers.
Experiment Setup | Yes | "Parameter values: σ_ξ = σ_0 = 1. ... parameters N = 200, σ_0^2 = 2 and σ_r = 0.2. ... We initialized S with random, weak and uncorrelated elements (cf. the end of Sec. 4, with ζ = 0.01), and ran the L-BFGS optimization algorithm using the gradient of Eq. 12 to minimize L(S) (with λ_L2 = 0.1). ... Parameters: N = 200, N_I = 100, σ_ξ = 1, τ_m = 20 ms. ... The first two networks are of size N = 200, while the optimized E/I network has size N + N_I = 300." (An optimization skeleton follows the table.)
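
The "Research Type" excerpt summarizes the paper's core contrast: Langevin sampling (LS in the excerpt) forces a symmetric weight matrix and mixes slowly in high dimensions, whereas non-symmetric dynamics with the same stationary distribution can be far faster. Below is a minimal sketch of that contrast in a linear-Gaussian setting with unit-covariance noise; the network size, target covariance, and skew strength are illustrative placeholders, not the paper's optimized E/I network.

```python
# Sketch (not the paper's code): Langevin sampling with symmetric weights
# vs. a sampler with an added skew-symmetric term, same stationary law.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # paper uses N = 200; kept small here
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + np.eye(N)           # a random SPD target covariance

Sigma_inv = np.linalg.inv(Sigma)
W_langevin = -0.5 * Sigma_inv             # symmetric weights: classic Langevin

S = rng.standard_normal((N, N))
S = 0.5 * (S - S.T)                       # skew-symmetric perturbation (placeholder)
W_fast = (-0.5 * np.eye(N) + 3.0 * S) @ Sigma_inv

# Both satisfy W Sigma + Sigma W^T = -I, so with unit-covariance noise the
# dynamics dx = W x dt + dB are stationary at N(0, Sigma) in both cases;
# only the mixing speed differs, which is the degree of freedom the paper
# optimizes with control-theoretic methods.
for name, W in [("Langevin", W_langevin), ("skew-augmented", W_fast)]:
    lam = np.linalg.eigvals(W)
    print(f"{name:>14}: slowest decay rate = {-lam.real.max():.3f}")
```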
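The "Open Datasets" row quotes the inverse-Wishart construction of the toy covariance. The paper's exact mapping from (σ_0^2, σ_r) to inverse-Wishart parameters is not quoted above, so the sketch below fixes one plausible reading, assuming σ_0^2 is the mean variance and σ_r the relative spread of the variances; both the degrees-of-freedom formula and the scale matrix are assumptions, not the paper's recipe.

```python
# Hedged sketch of the toy covariance: Sigma ~ inverse-Wishart with
# N = 200, sigma_0^2 = 2, sigma_r = 0.2; the (df, scale) mapping is assumed.
import numpy as np
from scipy.stats import invwishart

N, sigma0_sq, sigma_r = 200, 2.0, 0.2

# For IW(Psi, nu): E[Sigma_ii] = Psi_ii / (nu - N - 1) and the relative
# variance of Sigma_ii is 2 / (nu - N - 3), so a relative standard
# deviation of sigma_r corresponds to nu = N + 3 + 2 / sigma_r^2.
nu = N + 3 + 2.0 / sigma_r**2
Psi = sigma0_sq * (nu - N - 1) * np.eye(N)

Sigma = invwishart(df=nu, scale=Psi).rvs(random_state=0)
print("mean variance:", np.diag(Sigma).mean())   # ~ sigma_0^2 = 2
print("rel. sd of variances:",
      np.diag(Sigma).std() / np.diag(Sigma).mean())  # ~ sigma_r = 0.2
```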
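The "Experiment Setup" row describes the optimization loop: initialize S weak and random (ζ = 0.01), then run L-BFGS on L(S) with an L2 penalty (λ_L2 = 0.1). Eq. 12 and its gradient are not reproduced here, so the skeleton below substitutes a stand-in slowness cost, tr(Q) with W Q + Q W^T = -I (the total stationary variance per unit noise, which grows as the dynamics slow down); the W = -I + S parameterization and the reduced N are likewise assumptions made only for this sketch.

```python
# Hedged skeleton of the quoted optimization step; the objective is a
# stand-in for the paper's L(S), whose Eq. 12 gradient is not quoted above.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, zeta, lambda_L2 = 10, 0.01, 0.1        # paper: N = 200; shrunk for speed

def loss(s_flat):
    S = s_flat.reshape(N, N)
    W = -np.eye(N) + S                    # leak plus recurrent weights (assumed form)
    if np.linalg.eigvals(W).real.max() >= -1e-6:
        return 1e6                        # guard: heavily penalize unstable W
    Q = solve_continuous_lyapunov(W, -np.eye(N))   # W Q + Q W^T = -I
    return np.trace(Q) + lambda_L2 * np.sum(S**2)  # slowness proxy + L2 penalty

s0 = zeta * rng.standard_normal(N * N)    # weak, uncorrelated initialization
res = minimize(loss, s0, method="L-BFGS-B")   # finite-difference gradients here;
                                              # the paper uses Eq. 12's analytic gradient
print("final L(S):", res.fun)
```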