Brain-like Flexible Visual Inference by Harnessing Feedback Feedforward Alignment

Authors: Tahereh Toosi, Elias Issa

NeurIPS 2023

Reproducibility assessment (each entry gives the variable, the result, and the LLM's supporting response):
Research Type: Experimental. "In our study, we demonstrate the effectiveness of FFA in co-optimizing classification and reconstruction tasks on widely used MNIST and CIFAR10 datasets."
Researcher Affiliation: Academia. Tahereh Toosi, Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY (tahereh.toosi@columbia.edu); Elias B. Issa, Department of Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY (elias.issa@columbia.edu).
Pseudocode: Yes. "A pseudo-code for a simple three-layer neural network to clarify the parameter updates in FFA is in algorithm 1. [...] Algorithm 1 Training a simple three-layer network by FFA [...] Algorithm 2 The core visual inference algorithm (adapted with minimal modification from Kadkhodaie and Simoncelli (2021))" (see the FFA training sketch after this list).
Open Source Code: Yes. Code is available at: https://github.com/toosi/Feedback_Feedforward_Alignment
Open Datasets: Yes. "We demonstrate the effectiveness of FFA in co-optimizing classification and reconstruction tasks on widely used MNIST and CIFAR10 datasets."
Dataset Splits: No. The paper mentions the MNIST and CIFAR10 datasets and discusses training and testing, but it does not state train/validation/test split percentages, absolute sample counts, or the splitting methodology. While these datasets have standard splits, the paper does not say how they were applied.
Hardware Specification: No. The paper mentions "a NVIDIA GPU grant" and that work was "performed using the Columbia Zuckerman Axon GPU cluster", but it does not specify exact GPU models (e.g., A100, V100), CPU models, or other detailed hardware specifications for the experiments.
Software Dependencies: No. The paper does not list specific software dependencies with version numbers (e.g., library names with their exact versions) used in the experiments.
Experiment Setup: Yes. "For the experiment on a fully connected architecture, we used a 4-layer network with [1024, 256, 256, 10] neurons in each layer and ReLU non-linearity between layers. For the experiment on a convolutional architecture, we used a modified version of ResNet (He et al., 2015), where the last convolutional layer has the same number of channels as classes, and an adaptive average pooling operator is used to read out of each channel. [...] We added Gaussian noise with zero mean and varied the variance σ² = [0.0, 0.4, 0.8, 1.0] to assess the robustness of models to input noise." (see the readout and noise-sweep sketch below).
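
To make the Algorithm 1 entry concrete, here is a minimal NumPy sketch of one FFA training step for a simple three-layer fully connected network. It assumes a softmax cross-entropy classification loss and a mean-squared reconstruction loss, and all variable names are ours; treat it as an illustration of the update structure, not the paper's exact implementation.

import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]  # hypothetical widths for a three-weight-layer net

# Feedforward weights W and separate (untied) feedback weights B.
W = [rng.normal(0, 0.05, (sizes[i + 1], sizes[i])) for i in range(3)]
B = [rng.normal(0, 0.05, (sizes[i], sizes[i + 1])) for i in range(3)]

def relu(x):
    return np.maximum(x, 0.0)

def ffa_step(x, y_onehot, lr=1e-3):
    # Feedforward (classification) pass through W.
    a1 = W[0] @ x;  h1 = relu(a1)
    a2 = W[1] @ h1; h2 = relu(a2)
    logits = W[2] @ h2
    p = np.exp(logits - logits.max()); p /= p.sum()
    e = p - y_onehot  # softmax cross-entropy error at the output

    # Credit assignment for W transports the error through the feedback
    # weights B in place of W^T (feedback-alignment-style transport).
    d2 = (B[2] @ e) * (a2 > 0)
    d1 = (B[1] @ d2) * (a1 > 0)
    W[2] -= lr * np.outer(e, h2)
    W[1] -= lr * np.outer(d2, h1)
    W[0] -= lr * np.outer(d1, x)

    # Feedback (reconstruction) pass through B, from the top representation
    # back to input space.
    b2 = B[2] @ logits; r2 = relu(b2)
    b1 = B[1] @ r2;     r1 = relu(b1)
    x_hat = B[0] @ r1
    eps = x_hat - x  # mean-squared reconstruction error gradient

    # Symmetrically, credit assignment for B transports the reconstruction
    # error through W in place of B^T.
    g1 = (W[0] @ eps) * (b1 > 0)
    g2 = (W[1] @ g1) * (b2 > 0)
    B[0] -= lr * np.outer(eps, r1)
    B[1] -= lr * np.outer(g1, r2)
    B[2] -= lr * np.outer(g2, logits)

# One step on a dummy MNIST-sized example.
x = rng.random(784)
y = np.zeros(10); y[3] = 1.0
ffa_step(x, y)

The point of the sketch is the symmetry: the classification error for the feedforward weights W is carried by the feedback weights B rather than by W^T, and the reconstruction error for B is carried by W, so co-training the two pathways for their respective tasks drives them into alignment.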
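The convolutional readout and the noise-robustness sweep quoted in the Experiment Setup entry can be sketched in PyTorch as follows. The channel width, the stand-in batch, and the omitted ResNet backbone are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

num_classes = 10
feat_channels = 64  # hypothetical width of the backbone's last feature map

# Readout as described: the last conv layer has one channel per class, and
# adaptive average pooling reads a scalar out of each channel as the logit.
readout = nn.Sequential(
    nn.Conv2d(feat_channels, num_classes, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),  # (batch, num_classes, 1, 1) -> (batch, num_classes)
)

def add_input_noise(x, sigma2):
    # Zero-mean Gaussian noise with variance sigma^2 (std = sqrt(sigma2)).
    return x + torch.randn_like(x) * sigma2 ** 0.5

# Robustness sweep over the paper's variance grid, with stand-in data.
images = torch.rand(8, 3, 32, 32)  # placeholder CIFAR10-sized batch
for sigma2 in [0.0, 0.4, 0.8, 1.0]:
    noisy = add_input_noise(images, sigma2)
    feats = torch.rand(8, feat_channels, 8, 8)  # stand-in for backbone(noisy)
    logits = readout(feats)                     # shape: (8, num_classes)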