Dense Associative Memory for Pattern Recognition
Authors: Dmitry Krotov, John J. Hopfield
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set. The performance of the proposed classification framework is studied as a function of the power n. The test errors as training progresses are shown in Fig. 1B. |
| Researcher Affiliation | Academia | Dmitry Krotov Simons Center for Systems Biology Institute for Advanced Study Princeton, USA krotov@ias.edu John J. Hopfield Princeton Neuroscience Institute Princeton University Princeton, USA hopfield@princeton.edu |
| Pseudocode | No | No explicitly labeled 'Pseudocode' or 'Algorithm' sections, and no structured code-like blocks, were found. |
| Open Source Code | No | The paper does not contain any statement about making the source code available or provide a link to a code repository. |
| Open Datasets | Yes | The MNIST data set is a collection of handwritten digits, with 60,000 training examples and 10,000 test images. |
| Dataset Splits | No | The paper refers to a 'validation set' (see Appendix A in the Supplemental Material) but does not provide explicit split percentages or sample counts in the main text. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running experiments are provided. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names with versions) are provided. |
| Experiment Setup | No | The paper mentions aspects of the experimental setup, including initialization, the choice of activation function, the optimization method (backpropagation), and memory normalization. Hyperparameters were optimized on a validation set, with details deferred to an appendix, but specific values (e.g., learning rate, batch size) are not given in the main text. |
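Since the paper provides no pseudocode or source code, the following is a minimal sketch of the dense associative memory update rule it studies, using a rectified polynomial interaction F(x) = max(x, 0)^n. All sizes (N = 64 neurons, K = 4 patterns, n = 3, 12 corrupted bits) and the recovery demo are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

def F(x, n=3):
    # Rectified polynomial interaction; n=3 is an illustrative choice
    # (the paper studies performance as a function of the power n).
    return np.where(x > 0, x ** n, 0.0)

def sweep(sigma, xi, n=3):
    """One asynchronous sweep of the dense-memory update.

    sigma: current state, shape (N,), entries +/-1
    xi:    stored patterns, shape (K, N), entries +/-1
    Each neuron takes the sign that lowers the energy
    E = -sum_mu F(xi_mu . sigma).
    """
    sigma = sigma.copy()
    for i in range(sigma.size):
        # Per-pattern field from all neurons except i.
        partial = xi @ sigma - xi[:, i] * sigma[i]   # shape (K,)
        e_plus = F(xi[:, i] + partial, n).sum()      # energy gain if sigma_i = +1
        e_minus = F(-xi[:, i] + partial, n).sum()    # energy gain if sigma_i = -1
        sigma[i] = 1.0 if e_plus >= e_minus else -1.0
    return sigma

rng = np.random.default_rng(0)
N, K = 64, 4
xi = rng.choice([-1.0, 1.0], size=(K, N))

# Corrupt 12 bits of the first stored pattern, then let the dynamics clean it up.
probe = xi[0].copy()
flip = rng.choice(N, size=12, replace=False)
probe[flip] *= -1

for _ in range(5):
    probe = sweep(probe, xi)

recovered = int((probe == xi[0]).sum())
print(recovered)  # number of bits matching the stored pattern, out of N
```

With so few stored patterns the signal term dominates the crosstalk by a wide margin once it is raised to the power n, so the corrupted probe is pulled back to the stored pattern within a sweep or two; larger n supports storing more patterns at the same network size, which is the paper's central point.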