Sequential Memory with Temporal Predictive Coding
Authors: Mufeng Tang, Helen Barron, Rafal Bogacz
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we show that the whitening step in single-layer tPC models results in more stable performance than the AHN and its modern variants [15] in sequential memory, due to the highly variable and correlated structure of natural sequential inputs; we show that tPC can successfully reproduce several behavioral observations in humans, including the impact of sequence length on word memory and the primacy/recency effect. |
| Researcher Affiliation | Academia | Mufeng Tang, Helen Barron, Rafal Bogacz MRC Brain Network Dynamics Unit, University of Oxford, UK {mufeng.tang, helen.barron, rafal.bogacz}@bndu.ox.ac.uk |
| Pseudocode | Yes | The memorization and recall pseudocode for the tPC models is provided in the supplementary material (SM); a minimal sketch of this loop follows the table. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing code for the methodology or a link to a code repository. |
| Open Datasets | Yes | sequences of binarized MNIST images [56]; Moving MNIST dataset [58]; CIFAR10 [59] images; UCF101 [60] dataset |
| Dataset Splits | No | The paper describes using various datasets for experiments but does not provide specific train/validation/test split percentages or sample counts. |
| Hardware Specification | No | The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work. http://dx.doi.org/10.5281/zenodo.22558 |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | In this work we use β = 5 for all MCAHNs. (...) We then trained a 2-layer tPC with a hidden size of 5 to memorize this sequence and queried it offline. (...) Using one-hot vectors to represent letters (i.e., each word in our experiment is an ordered combination of one-hot vectors of letters, a minimal example of a word with 3 letters being: [0, 1, 0], [1, 0, 0], [0, 0, 1]), we demonstrate accuracy as the proportion of perfectly recalled sequences across varying lengths. (...) Using one-hot vectors and a fixed sequence length of 7 (6 positions are shown, as the first position is given as the cue to initiate recall in our experiment), we visualize recall frequency at different positions across simulated sequences (100 repetitions, multiple seeds for error bars). A sketch of this serial-recall measurement follows the table. |
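
Since the paper provides only pseudocode in the SM and no released code, the following is a rough NumPy sketch of what a single-layer temporal predictive coding (tPC) memorize/recall loop could look like, based on the paper's description: a transition matrix is trained by gradient descent on the temporal prediction error, and recall iterates the learned transition from a cue. The function names, the `tanh` nonlinearity, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_tpc(seq, lr=1e-2, epochs=200):
    """Minimal single-layer tPC sketch: learn a transition matrix W that
    predicts x_t from f(x_{t-1}) by gradient descent on the squared
    temporal prediction error. `seq` is a (T, d) array of memory items."""
    d = seq.shape[1]
    W = np.zeros((d, d))
    for _ in range(epochs):
        for t in range(1, len(seq)):
            pred = W @ np.tanh(seq[t - 1])                 # prediction of x_t
            err = seq[t] - pred                            # temporal prediction error
            W += lr * np.outer(err, np.tanh(seq[t - 1]))   # error-driven update
    return W

def recall(W, cue, steps):
    """Recall a sequence by iterating the learned transition from a cue."""
    xs = [cue]
    for _ in range(steps):
        xs.append(W @ np.tanh(xs[-1]))
    return np.stack(xs)
```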
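The word-memory setup quoted in the Experiment Setup row can likewise be reproduced in outline: a "word" is an ordered sequence of one-hot letter vectors, accuracy is the proportion of perfectly recalled words at each length, and the first letter serves as the recall cue. The sketch below reuses the hypothetical `fit_tpc`/`recall` helpers from the previous block; the alphabet size, trial count, and argmax readout are assumptions, not details from the paper.

```python
def random_word(n_letters, alphabet=26, rng=None):
    """A 'word' is an ordered sequence of one-hot letter vectors,
    e.g. a 3-letter word could be [0, 1, 0], [1, 0, 0], [0, 0, 1]."""
    rng = rng or np.random.default_rng()
    word = np.zeros((n_letters, alphabet))
    word[np.arange(n_letters), rng.integers(alphabet, size=n_letters)] = 1.0
    return word

def recall_accuracy(lengths, n_trials=100, seed=0):
    """Proportion of perfectly recalled words per length, cueing recall
    with the first letter as in the paper's serial-recall setup."""
    rng = np.random.default_rng(seed)
    acc = {}
    for n in lengths:
        hits = 0
        for _ in range(n_trials):
            word = random_word(n, rng=rng)
            W = fit_tpc(word)                      # memorize the word
            out = recall(W, word[0], steps=n - 1)  # cue with the first letter
            hits += np.array_equal(out.argmax(-1), word.argmax(-1))
        acc[n] = hits / n_trials
    return acc

# Example: recall_accuracy(lengths=range(3, 10)) maps each length to accuracy.
```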