Long Sequence Hopfield Memory
Authors: Hamza Chaudhry, Jacob Zavatone-Veth, Dmitry Krotov, Cengiz Pehlevan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We find a close match between theoretical calculation and numerical simulation, and further establish the ability of this model to store and recall sequences of correlated patterns. Figure 2: Testing the transition and sequence capacities of Dense Nets with polynomial and exponential nonlinearities. A. Scaling of transition capacity (log10(P_T), left) and sequence capacity (log10(P_S), right) with network size. [...] We showcase a 10 image subsequence. |
| Researcher Affiliation | Collaboration | ¹John A. Paulson School of Engineering and Applied Sciences, ²Center for Brain Science, ³Department of Physics, ⁴Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138; ⁵MIT-IBM Watson AI Lab, IBM Research, Cambridge, MA 02142 |
| Pseudocode | No | The paper describes mathematical update rules (e.g., Equations 1, 2, 11, 13, 14, and 18) but does not present them in pseudocode blocks or algorithm listings. (A minimal illustrative sketch of such an update rule appears below the table.) |
| Open Source Code | Yes | Source code is available on GitHub at https://github.com/Pehlevan-Group/LongSequenceHopfieldMemory. |
| Open Datasets | Yes | For demonstration, we store a sequence of 200000 highly-correlated images from the Moving MNIST dataset and attempt to recall this sequence using Dense Nets with different nonlinearities [42]. (A numerically stable sketch of the exponential update appears below the table.) |
| Dataset Splits | No | The paper does not explicitly describe train/validation/test dataset splits with percentages or sample counts. It mentions patterns drawn from a Rademacher distribution and the Moving MNIST dataset, but does not specify how these were partitioned for training or validation. |
| Hardware Specification | Yes | Experiments were run on the Harvard University FASRC Cannon HPC cluster (https://www.rc.fas.harvard.edu/), using Nvidia A100 80GB GPUs. |
| Software Dependencies | No | The paper mentions 'Source code is available on Git Hub' but does not specify any software dependencies (e.g., Python, PyTorch, TensorFlow) with version numbers required to reproduce the experiments. |
| Experiment Setup | Yes | Seq Net and Polynomial Dense Net (d = 2) are simulated with N = 300 neurons and P = 100 patterns. ... For demonstration, we store a sequence of 200000 highly-correlated images from the Moving MNIST dataset ... using Dense Nets of size N = 784. ... We simulate Mixed Nets with N = 100, τ = 5, and attempt to store P = 40 patterns. |
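
The transition rule the paper formalizes (e.g., Equations 1 and 2) is easy to prototype. The following is a minimal NumPy sketch of an asymmetric Dense Net step in which each stored pattern votes for its successor through a separation function applied to its overlap with the current state. The rectified-polynomial choice of `f`, the `sign` update, and the absence of normalization are assumptions for illustration rather than the authors' exact implementation; `N`, `P`, and `d` follow the reported setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from the reported setup: N = 300 neurons, P = 100 patterns,
# polynomial degree d = 2.
N, P, d = 300, 100, 2

# Rademacher patterns: i.i.d. +/-1 entries, as used in the paper's simulations.
xi = rng.choice([-1.0, 1.0], size=(P, N))

def transition(x, xi, f):
    """One asymmetric Dense Net step: each stored pattern xi[mu] votes for
    its successor xi[mu + 1], weighted by the separation function f applied
    to its overlap with the current state x."""
    overlaps = xi[:-1] @ x            # <xi^mu, x> for mu = 0, ..., P-2
    return np.sign(xi[1:].T @ f(overlaps))

# Rectified-polynomial separation function (one common Dense Net choice;
# an assumption here, not necessarily the paper's exact form).
poly = lambda z: np.maximum(z, 0.0) ** d

# Step through the stored sequence from its first pattern and count bit errors.
x = xi[0].copy()
errors = 0
for mu in range(1, P):
    x = transition(x, xi, poly)
    errors += int(np.sum(x != xi[mu]))
print(f"total bit errors over {P - 1} transitions: {errors}")
```

With P well below the transition capacity for d = 2, this sketch should step through the sequence with few or no bit errors, mirroring the capacity experiments summarized in the Research Type row.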
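For the exponential nonlinearity used to store the 200000-frame Moving MNIST sequence at N = 784, overlaps of order N can overflow a naive `exp`. A standard stabilization is to subtract the maximum overlap before exponentiating: every weight is then rescaled by the same positive constant, which the `sign` update ignores. The inverse temperature `beta` and the threshold binarization of frames to ±1 patterns are hypothetical choices, not taken from the paper.

```python
import numpy as np

def exp_transition(x, xi, beta=1.0):
    """One exponential Dense Net step, stabilized against overflow.

    Subtracting overlaps.max() rescales all exponential weights by a common
    positive factor, which np.sign() leaves unchanged.
    """
    overlaps = beta * (xi[:-1] @ x)
    weights = np.exp(overlaps - overlaps.max())
    return np.sign(xi[1:].T @ weights)

# Hypothetical preprocessing: binarize a grayscale frame (values in [0, 1])
# flattened to N = 784 pixels into a +/-1 pattern before storage.
def binarize(frame, threshold=0.5):
    return np.where(frame.ravel() > threshold, 1.0, -1.0)
```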