On the relationship between variational inference and auto-associative memory
Authors: Louis Annabi, Alexandre Pitti, Mathias Quoy
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In section 5, we evaluate the obtained algorithms on the task of memory retrieval on two image datasets (CIFAR10 and CLEVR), and compare their performance with other AM models. We measure performance using the percentage of properly retrieved patterns from associative memories containing N = 100 patterns. We report results in tables 2, 3 and 4. (A hedged sketch of this retrieval metric follows the table.) |
| Researcher Affiliation | Academia | Louis Annabi ETIS UMR 8051 CY Cergy Paris Université, ENSEA, CNRS Cergy, France louis.annabi@gmail.com; Alexandre Pitti ETIS UMR 8051 CY Cergy Paris Université, ENSEA, CNRS Cergy, France alexandre.pitti@ensea.fr; Mathias Quoy ETIS UMR 8051 CY Cergy Paris Université, ENSEA, CNRS Cergy, France mathias.quoy@ensea.fr |
| Pseudocode | Yes | Detailed derivations of both models are provided in appendix C, where their forward passes are given in algorithms 1 and 2, respectively. (A hedged sketch of such an iterative-inference forward pass follows the table.) |
| Open Source Code | Yes | All code will be made publicly available at https://github.com/LouisAnnabi/memory_vae_inference. An anonymous version for review is available at: https://anonymous.4open.science/r/on_the_relationship_between_variational_inference_and_autoassociative_memory-9252/ |
| Open Datasets | Yes | We evaluate the proposed models on two image datasets: CIFAR10 [20] (MIT License) and CLEVR [17] (CC BY 4.0 License). |
| Dataset Splits | Yes | For the CIFAR10 dataset, we use the official train/test split, and for the CLEVR dataset, we use the official train/val/test split. |
| Hardware Specification | Yes | All experiments were run on a single machine with an NVIDIA RTX 3090 GPU and an Intel Core i7-10700K CPU. |
| Software Dependencies | No | The paper mentions "Python 3 using PyTorch" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For all models using a VAE encoder or decoder, we trained the VAE model for 200 epochs using the Adam optimizer (β1 = 0.9, β2 = 0.999) with a learning rate of 1e-4 and a batch size of 64. The learning rate is decayed by a factor of 0.1 at epochs 100, 150 and 180. (A PyTorch sketch of this schedule follows the table.) |
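
To make the retrieval metric concrete, here is a minimal sketch of how the percentage of properly retrieved patterns could be computed for a memory holding N = 100 patterns. The `retrieve` and `corrupt` callables, the nearest-stored-pattern stand-in memory, and the `tol` success threshold are all illustrative assumptions; the paper's exact corruption scheme and success criterion are not reproduced here.

```python
import torch

def retrieval_accuracy(retrieve, patterns, corrupt, tol=1e-2):
    # Fraction of stored patterns recovered from corrupted queries.
    # `retrieve` is any associative-memory read-out; `corrupt` builds the
    # queries (e.g. masking half of each image). Both, and the `tol`
    # threshold, are assumptions made for illustration.
    queries = corrupt(patterns)
    recovered = retrieve(queries)
    err = (recovered - patterns).pow(2).flatten(1).mean(dim=1)
    return (err < tol).float().mean().item()

# Toy usage: N = 100 flattened CIFAR10-sized patterns and a
# nearest-stored-pattern memory standing in for the paper's models.
patterns = torch.rand(100, 3 * 32 * 32)

def nearest_pattern(queries):
    return patterns[torch.cdist(queries, patterns).argmin(dim=1)]

mask_second_half = lambda x: x * (torch.arange(x.shape[1]) < x.shape[1] // 2)
print(retrieval_accuracy(nearest_pattern, patterns, mask_second_half))
```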
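The pseudocode cells refer to algorithms 1 and 2 in the paper's appendix C; those exact update rules are not reproduced here. The following is only a generic sketch of retrieval framed as iterative variational inference: initialise the latent code with the (amortised) encoder, then refine it by gradient descent on the reconstruction error. The step count, step size, and the linear encoder/decoder stand-ins are assumptions.

```python
import torch

def iterative_retrieval(encoder, decoder, query, steps=50, lr=0.1):
    # Start from the amortised posterior estimate, then refine the latent
    # code by gradient descent on the squared reconstruction error.
    z = encoder(query).detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (decoder(z) - query).pow(2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()

# Toy usage with linear stand-ins for the VAE encoder and decoder.
enc, dec = torch.nn.Linear(12, 4), torch.nn.Linear(4, 12)
restored = iterative_retrieval(enc, dec, torch.rand(1, 12))
```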
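Finally, a minimal PyTorch sketch of the quoted training schedule. Only the optimizer settings, batch size, epoch count, and decay milestones come from the paper; `TinyVAE` and its loss are placeholder assumptions, not the authors' architecture.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Placeholder VAE; the paper's actual architecture is not reproduced here.
class TinyVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 3 * 32 * 32), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z).view_as(x), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

train_set = datasets.CIFAR10("data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = TinyVAE()
# Adam with lr 1e-4, betas (0.9, 0.999), as quoted from the paper.
optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Decay the learning rate by a factor of 0.1 at epochs 100, 150 and 180.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150, 180], gamma=0.1)

for epoch in range(200):
    for x, _ in loader:
        recon, mu, logvar = model(x)
        loss = vae_loss(recon, x, mu, logvar)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```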