Nonlinear dynamics of localization in neural receptive fields

Authors: Leon Lufkin, Andrew Saxe, Erin Grant

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our analytical model identifies a concise description of the higher-order statistics that drive emergence, and we validate both positive and negative predictions of this analytical model via simulations with many neurons; see Fig. 1 (right)." "We describe experiments to validate the generalizability of the analytical results from Section 3."
Researcher Affiliation | Academia | Leon Lufkin, Yale University (leon.lufkin@yale.edu); Andrew Saxe, Gatsby Unit & SWC, UCL (a.saxe@ucl.ac.uk); Erin Grant, Gatsby Unit & SWC, UCL (erin.grant@ucl.ac.uk)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code to replicate experiments and figures is available at https://github.com/leonlufkin/localization.
Open Datasets | Yes | "Since all datasets are procedurally generated, training depends on both the model architecture and the complexity of sampling the data... The methods for procedural generation of Ising, NLGP, and Kur data models are detailed in Section 2.3." (A hedged sampling sketch follows the table.)
Dataset Splits | No | The paper mentions supervised training and batch gradient descent but does not explicitly specify train/validation/test dataset splits.
Hardware Specification | No | "We run all experiments on a single CPU machine locally or on a compute cluster."
Software Dependencies | No | The paper mentions the "Fast ICA algorithm from scikit-learn [Hyv99; Ped+11]" but does not provide version numbers for scikit-learn or any other software dependency.
Experiment Setup | Yes | "For simulations, we initialize the weights and biases as independent draws from an isotropic Gaussian distribution with scaled variance, and train with batch gradient descent with a fixed learning rate on the mean-squared error (MSE) evaluated on input-output pairs from the task... The models had N = 40 input units, K = 10 hidden units, and an initialization variance of 0.1." (A minimal training sketch follows the table.)
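
For readers attempting to reproduce the procedurally generated datasets, below is a minimal sketch of an NLGP-style input model: a translation-invariant Gaussian field on a ring of pixels passed through a saturating pointwise nonlinearity. The kernel shape, gain, and normalization here are illustrative assumptions, not the paper's exact parameterization; Section 2.3 of the paper and the authors' repository define the actual Ising, NLGP, and Kur generators.

```python
import numpy as np
from scipy.special import erf

def sample_nlgp_like(n_samples, n_inputs=40, length_scale=4.0, gain=3.0, seed=0):
    """Illustrative NLGP-style sampler: a smooth Gaussian field on a ring,
    squashed by a pointwise nonlinearity. All parameter values are guesses."""
    rng = np.random.default_rng(seed)
    # Translation-invariant (circulant) covariance on a ring of pixels.
    idx = np.arange(n_inputs)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, n_inputs - dist)        # wrap-around distance
    cov = np.exp(-(dist / length_scale) ** 2)
    cov += 1e-9 * np.eye(n_inputs)                  # jitter for numerical PSD-ness
    # Draw correlated Gaussian inputs, then apply the nonlinearity; the
    # resulting higher-order statistics are what the paper's analysis
    # links to the emergence of localized receptive fields.
    z = rng.multivariate_normal(np.zeros(n_inputs), cov, size=n_samples)
    x = erf(gain * z)
    return x / x.std()                              # unit overall variance
```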
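
And here is a minimal sketch of the quoted training setup: a one-hidden-layer network with N = 40 inputs and K = 10 hidden units, Gaussian initialization with variance 0.1, and full-batch gradient descent with a fixed learning rate on the MSE. The tanh activation, scalar readout, learning rate, and step count are assumptions; the repository at https://github.com/leonlufkin/localization contains the actual architecture and hyperparameters.

```python
import numpy as np

def train_mse_gd(X, y, K=10, init_var=0.1, lr=0.1, n_steps=5000, seed=0):
    """Full-batch gradient descent on MSE for a one-hidden-layer network.
    Matches the quoted setup (Gaussian init with variance 0.1, fixed
    learning rate); activation and readout are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    B, N = X.shape
    W = rng.normal(0.0, np.sqrt(init_var), size=(K, N))  # hidden weights
    b = rng.normal(0.0, np.sqrt(init_var), size=K)       # hidden biases
    a = rng.normal(0.0, np.sqrt(init_var), size=K)       # readout weights
    for _ in range(n_steps):
        h = np.tanh(X @ W.T + b)                  # (B, K) hidden activations
        err = h @ a - y                           # (B,) residuals of scalar output
        dh = (err[:, None] * a) * (1.0 - h ** 2)  # backprop through tanh
        # Gradient steps on the batch-averaged squared error.
        a -= lr * (h.T @ err) / B
        W -= lr * (dh.T @ X) / B
        b -= lr * dh.mean(axis=0)
    return W, b, a
```

Rows of the returned `W` can then be plotted against input position to check whether hidden units develop the localized receptive fields the paper analyzes.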