Associative Memory in Iterated Overparameterized Sigmoid Autoencoders

Authors: Yibo Jiang, Cengiz Pehlevan

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the predictions of our theoretical results in simulations. [...] 5. Simulations
Researcher Affiliation | Academia | 1 John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; 2 Center for Brain Science, Harvard University, Cambridge, MA, USA.
Pseudocode | No | The paper presents mathematical equations and derivations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'The code is implemented with Pytorch.' However, it does not provide a link to, or any explicit statement about the availability of, the source code.
Open Datasets | Yes | 5.5. MNIST Data
Dataset Splits | No | The paper mentions that 'Training is stopped when the training loss of the auto-encoder drops below a threshold' but does not specify any training/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility or hyperparameter tuning.
Hardware Specification | Yes | Only experiments that can be trained to have loss below 10^-7 or fit into a single Titan V GPU are included.
Software Dependencies | No | The paper states 'The code is implemented with Pytorch.' However, it does not provide specific version numbers for PyTorch or any other software dependency, which are necessary for reproducibility.
Experiment Setup | Yes | Implementation details: We used vanilla gradient descent with learning rate 1, similar to (Jacot et al., 2018). The code is implemented with Pytorch. For each setting, we ran experiments 100 times to get 100 sets of samples. For all experiments except experiments on MNIST, the samples are randomly generated. Training is stopped when the training loss of the auto-encoder drops below a threshold, which we chose to be 10^-7.
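For readers who want to approximate the setup quoted in the Experiment Setup row, the following is a minimal PyTorch sketch of that training loop: full-batch vanilla gradient descent with learning rate 1, stopping once the reconstruction loss falls below 10^-7, followed by iterating the trained autoencoder to probe the associative-memory behavior the paper studies. The architecture (a single hidden sigmoid layer), the layer widths, the data dimension, the random seed, and the iteration count are illustrative assumptions; the excerpt above does not specify them.

```python
# Hypothetical sketch of the quoted setup; sizes and architecture are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, dim, width = 5, 20, 2000      # overparameterized: width >> n_samples
X = torch.rand(n_samples, dim)           # randomly generated samples (non-MNIST experiments)

autoencoder = nn.Sequential(
    nn.Linear(dim, width),
    nn.Sigmoid(),
    nn.Linear(width, dim),
    nn.Sigmoid(),
)

loss_fn = nn.MSELoss()
# Vanilla gradient descent with learning rate 1, as quoted from the paper;
# whether this converges depends on the parameterization (the paper follows
# the NTK-style setup of Jacot et al., 2018).
optimizer = torch.optim.SGD(autoencoder.parameters(), lr=1.0)

# Train until the training loss drops below the 10^-7 threshold quoted above.
for step in range(500_000):
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(X), X)
    loss.backward()
    optimizer.step()
    if loss.item() < 1e-7:
        break

# Associative-memory check: iterate the trained map from a perturbed input
# and see whether it returns to the stored training example.
with torch.no_grad():
    z = X[0] + 0.05 * torch.randn(dim)
    for _ in range(100):
        z = autoencoder(z)
    print(f"stopped at step {step}, loss {loss.item():.2e}, "
          f"distance to stored example {torch.norm(z - X[0]).item():.3f}")
```

Repeating this sketch 100 times with freshly drawn samples would mirror the paper's stated protocol of 100 runs per setting.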