Associative Memories via Predictive Coding

Authors: Tommaso Salvatori, Yuhang Song, Yujian Hong, Lei Sha, Simon Frieder, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To test the model's capabilities, we perform multiple retrieval experiments from both corrupted and incomplete data points. In an extensive comparison, we show that this new model outperforms in retrieval accuracy and robustness popular associative memory models, such as autoencoders trained via backpropagation, and modern Hopfield networks."
Researcher Affiliation | Academia | Tommaso Salvatori (1), Yuhang Song (1,3), Yujian Hong (1), Lei Sha (1), Simon Frieder (1), Zhenghua Xu (2), Rafal Bogacz (3), Thomas Lukasiewicz (1); (1) Department of Computer Science, University of Oxford, UK; (2) State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China; (3) MRC Brain Network Dynamics Unit, University of Oxford, UK
Pseudocode | Yes | Algorithm 1: "Learning to generate s with IL" (a hedged sketch of inference learning appears after the table)
Open Source Code | No | The paper provides no explicit statement about open-source code availability and no link to a code repository.
Open Datasets | Yes | "To experimentally show that generative PCNs are AMs, we trained a 2-layer network with ReLU non-linearity on a subset of 100 images of Tiny ImageNet and CIFAR10. [...] We have trained a Hopfield network on N = {2, 3, 5, 10} images of the MNIST dataset. [...] We also test our model on ImageNet"
Dataset Splits | No | The paper describes the datasets used for training and the process of retrieving/reconstructing images from corrupted or partial versions of those same trained images, but it does not specify explicit train/validation/test splits with percentages or counts.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers.
Experiment Setup | Yes | "We trained 2-layer PCNs with ReLU non-linearity and hidden dimension n ∈ {512, 1024, 2048} on subsets of the aforementioned datasets of cardinality N = {100, 250, 500, 1000}. [...] We considered an image to be correctly reconstructed when the error between the original and the retrieved image was smaller than 0.001. [...] We trained two networks with hidden dimensions of 1024 and 2048, to generate 50 images of the first class of Tiny ImageNet (corresponding to goldfishes), and a network of 8192 hidden neurons to reconstruct 25 ImageNet images." (a retrieval sketch follows the inference-learning example below)
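
To make the Pseudocode row concrete, the following is a minimal NumPy sketch of inference learning (IL) for a generative PCN in the spirit of the paper's Algorithm 1. It is not the authors' implementation: the class name PCN, the layer sizes, the learning rates, and the iteration counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def drelu(x):
    return (x > 0).astype(x.dtype)

class PCN:
    """Generative predictive coding network. Layer 0 is the top latent
    layer; the last layer is clamped to a data point during memorization."""

    def __init__(self, sizes=(256, 1024, 3072)):
        self.sizes = sizes
        # W[l] predicts layer l+1 from layer l
        self.W = [rng.normal(0.0, 0.05, (m, n))
                  for n, m in zip(sizes[:-1], sizes[1:])]

    def errors(self, x):
        # eps[l] is the prediction error at layer l+1
        return [x[l + 1] - self.W[l] @ relu(x[l]) for l in range(len(self.W))]

    def relax(self, x, clamp, T=100, lr_x=0.05):
        """Inference: update unclamped activities by gradient descent on the
        squared prediction-error energy, holding the weights fixed."""
        for _ in range(T):
            eps = self.errors(x)
            for l in range(len(x)):
                if clamp[l]:
                    continue
                dx = np.zeros_like(x[l])
                if l > 0:                     # error in predicting this layer
                    dx -= eps[l - 1]
                if l < len(self.W):           # error this layer induces below
                    dx += drelu(x[l]) * (self.W[l].T @ eps[l])
                x[l] += lr_x * dx
        return eps

    def memorize(self, s, lr_w=1e-3):
        """One IL step on a data point s: clamp the output layer to s, relax
        the latent/hidden activities, then apply a local weight update."""
        x = [rng.normal(0.0, 0.1, n) for n in self.sizes]
        x[-1] = s.copy()
        eps = self.relax(x, clamp=[False] * (len(self.sizes) - 1) + [True])
        for l in range(len(self.W)):
            self.W[l] += lr_w * np.outer(eps[l], relu(x[l]))
```

Repeatedly calling memorize(s) over the stored images corresponds to one pass of the training loop; every weight update is local, using only the pre-synaptic activity and the post-synaptic prediction error.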
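
To match the retrieval protocol quoted in the Experiment Setup row, the sketch below reuses the PCN class and drelu from the previous example. Given a partial or corrupted cue, the known pixels of the output layer stay clamped while inference fills in the rest. The threshold check mirrors the paper's 0.001 criterion, with mean squared error assumed as the error measure (the quoted passage does not name it).

```python
import numpy as np

def retrieve(model, cue, known, T=500, lr_x=0.05):
    """Retrieve a stored image from a corrupted or incomplete cue.
    `known` is a boolean pixel mask; those output units stay clamped to
    the cue while all other activities relax to lower the energy."""
    rng = np.random.default_rng(1)
    x = [rng.normal(0.0, 0.1, n) for n in model.sizes]
    x[-1] = cue.copy()
    for _ in range(T):
        eps = model.errors(x)
        for l in range(len(x) - 1):          # latent/hidden layers are free
            dx = np.zeros_like(x[l])
            if l > 0:
                dx -= eps[l - 1]
            dx += drelu(x[l]) * (model.W[l].T @ eps[l])
            x[l] += lr_x * dx
        x[-1][~known] -= lr_x * eps[-1][~known]   # only unknown pixels move
    return x[-1]

def is_retrieved(original, reconstruction, tol=1e-3):
    # Retrieval criterion from the paper: error below 0.001 counts as a
    # correct reconstruction; MSE is an assumption made here.
    return np.mean((original - reconstruction) ** 2) < tol
```

For the incomplete-data experiments, `known` would mark the uncorrupted portion of the flattened image (for example, its top half), with `cue` carrying the original values there and zeros or noise elsewhere.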