Learning Attractor Dynamics for Generative Memory
Authors: Yan Wu, Gregory Wayne, Karol Gregor, Timothy Lillicrap
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To confirm that the emerging attractor dynamics help memory retrieval, we experiment with the Omniglot dataset [22] and images from DMLab [6], showing that the attractor dynamics consistently improve images corrupted by noise unseen during training, as well as low-quality prior samples. |
| Researcher Affiliation | Industry | Yan Wu, Greg Wayne, Karol Gregor, Timothy Lillicrap, DeepMind {yanwu,gregwayne,karolg,countzero}@google.com |
| Pseudocode | Yes | Algorithm 1 Training the Dynamic Kanerva Machine (Single training step) |
| Open Source Code | Yes | For reference, our implementation of the memory module is provided at https://github.com/deepmind/dynamic-kanerva-machines. |
| Open Datasets | Yes | We tested our model on Omniglot [22] and frames from DMLab tasks [6]. |
| Dataset Splits | No | The paper describes training on episodes of 32 patterns and testing on episodes of varying lengths, but it does not specify explicit training, validation, and test dataset splits (e.g., percentages or counts) or a clear methodology for partitioning data into these sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimiser [19]' and 'TensorFlow's matrix_solve_ls function' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We trained all models using the Adam optimiser [19] with learning rate 1 × 10⁻⁴. We used 16 filters in the convnet and a 32 × 100 memory for Omniglot, and 256 filters and a 64 × 200 memory for DMLab. We used the Bernoulli likelihood function for Omniglot, and the Gaussian likelihood function for DMLab data. Uniform noise U(0, 1/128) was added to the labyrinth data to prevent the Gaussian likelihood from collapsing. |
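The quoted experiment setup can be collected into a small configuration sketch. This is a minimal illustration of the reported hyperparameters, not the authors' released code; the dictionary keys and the helper function name are hypothetical, and only the numeric values come from the paper.

```python
import numpy as np

# Hyperparameters as reported in the paper (names here are illustrative).
CONFIG = {
    "learning_rate": 1e-4,  # Adam optimiser, 1 x 10^-4
    "omniglot": {"filters": 16, "memory_shape": (32, 100), "likelihood": "bernoulli"},
    "dmlab": {"filters": 256, "memory_shape": (64, 200), "likelihood": "gaussian"},
}

def add_dequantisation_noise(images, rng=None):
    """Add uniform noise U(0, 1/128) to pixel values, as described for the
    DMLab (labyrinth) data, to prevent the Gaussian likelihood from
    collapsing on near-discrete pixel intensities."""
    rng = np.random.default_rng() if rng is None else rng
    return images + rng.uniform(0.0, 1.0 / 128.0, size=images.shape)
```

The noise range 1/128 matches half a pixel step for 7-bit-quantised intensities, a common dequantisation choice for continuous likelihoods.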