Simplicial Hopfield networks

Authors: Thomas F Burns, Tomoki Fukai

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically show improved performance under parameter constraints. By restricting the total number of connections to that of pairwise Hopfield networks, we show simplicial Hopfield networks with a mixture of pairwise and setwise connections retain a surprising amount of the improved performance over pairwise networks, but with fewer parameters, and are robust to topological variability. We tested the performance of our simplicial Hopfield networks by embedding data from the MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and Tiny ImageNet (Le & Yang, 2015) datasets as memories. (An illustrative sketch of this mixed pairwise/setwise connectivity follows the table.)
Researcher Affiliation | Academia | Thomas F. Burns, Neural Coding and Brain Computing Unit, OIST Graduate University, Okinawa, Japan (thomas.burns@oist.jp); Tomoki Fukai, Neural Coding and Brain Computing Unit, OIST Graduate University, Okinawa, Japan (tomoki.fukai@oist.jp)
Pseudocode | No | The paper provides mathematical equations for the energy and update rules, but no explicit pseudocode or algorithm blocks are included.
Open Source Code | Yes | To reproduce our results in the main text and appendices, we provide our Python code as supplementary material at https://github.com/tfburns/simplicial-hopfield-networks.
Open Datasets | Yes | We tested the performance of our simplicial Hopfield networks by embedding data from the MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and Tiny ImageNet (Le & Yang, 2015) datasets as memories.
Dataset Splits | No | The paper mentions using the MNIST, CIFAR-10, and Tiny ImageNet datasets, but it does not specify train/validation/test splits as percentages or sample counts; it describes embedding memories and testing recall rather than a typical supervised-learning split.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. It only mentions running "numerical simulations."
Software Dependencies | No | The paper states that Python code is provided for reproducibility, but it does not list versions for Python or for key libraries (e.g., NumPy, PyTorch) used in the experiments.
Experiment Setup | Yes | In our numerical simulations, we perform updates synchronously until E is non-decreasing or until a maximum number of steps is reached, whichever comes first. In all tests, we used T1 = 100. We initialise S as one of the memory patterns corrupted by Gaussian noise with variance 0.5. To inspect changes in the energy landscapes of different network conditions, we set N = 10 and P = 10 random patterns. Correct recall was defined as the sum of squared differences being < 50. (An illustrative sketch of this recall procedure also follows the table.)
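
For readers who want a concrete picture of the mixed pairwise/setwise connectivity quoted in the "Research Type" row, below is a minimal Python sketch. It assumes Hebbian-style weights, bipolar patterns, and setwise connections restricted to triplets (2-simplices); the helper names (`sample_simplices`, `hebbian_weights`, `energy`) and the 50/50 pairwise/setwise split are illustrative assumptions, not the paper's exact construction.

```python
import itertools
import random

import numpy as np


def sample_simplices(n, budget, frac_setwise=0.5, rng=random):
    """Sample a mixture of pairwise edges and triplet (2-simplex) connections
    whose total count matches a fixed connection budget, e.g. C(n, 2),
    the number of connections in a fully pairwise network."""
    pairs = list(itertools.combinations(range(n), 2))
    triplets = list(itertools.combinations(range(n), 3))
    k_tri = int(budget * frac_setwise)
    k_pair = budget - k_tri
    return rng.sample(pairs, k_pair) + rng.sample(triplets, k_tri)


def hebbian_weights(patterns, simplices):
    """Hebbian-style weight per simplex: sum over memories of the product of
    the (bipolar) pattern entries on the simplex's vertices."""
    return {s: float(sum(np.prod(p[list(s)]) for p in patterns)) for s in simplices}


def energy(state, weights):
    """Energy = minus the weighted sum of state products, one term per simplex."""
    return -sum(w * np.prod(state[list(s)]) for s, w in weights.items())
```

With `budget = n * (n - 1) // 2`, the mixed network uses the same total number of connections as a fully connected pairwise Hopfield network, which is the parameter constraint the quoted text refers to.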
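
The "Experiment Setup" row can likewise be read as a rough recall loop. The sketch below reuses the helpers from the previous block (plus NumPy); the sign-of-local-field synchronous update is a standard choice for bipolar higher-order Hopfield networks and is an assumption here, since the paper defines its own update rules. The quoted criteria are kept: Gaussian corruption with variance 0.5, at most 100 synchronous steps or until the energy is non-decreasing, and correct recall when the sum of squared differences is below 50.

```python
import numpy as np

# Reuses sample_simplices(), hebbian_weights(), and energy() from the sketch above.


def local_fields(state, weights):
    """Field on neuron i: sum, over simplices containing i, of the simplex
    weight times the product of the other member neurons' states."""
    h = np.zeros(state.shape[0])
    for s, w in weights.items():
        for i in s:
            others = [j for j in s if j != i]
            h[i] += w * np.prod(state[others])
    return h


def recall(memory, weights, noise_var=0.5, max_steps=100, threshold=50.0, seed=0):
    """Corrupt a memory with Gaussian noise, update synchronously until the
    energy is non-decreasing or max_steps is reached, then test recall."""
    rng = np.random.default_rng(seed)
    state = memory + rng.normal(0.0, np.sqrt(noise_var), size=memory.shape)
    prev_e = energy(state, weights)
    for _ in range(max_steps):
        state = np.where(local_fields(state, weights) >= 0, 1.0, -1.0)  # synchronous sign update
        e = energy(state, weights)
        if e >= prev_e:  # stop once the energy stops decreasing
            break
        prev_e = e
    return float(np.sum((state - memory) ** 2)) < threshold


# Example: N = 10 neurons, P = 10 random bipolar patterns, pairwise connection budget.
n, p = 10, 10
rng = np.random.default_rng(1)
patterns = [rng.choice([-1.0, 1.0], size=n) for _ in range(p)]
weights = hebbian_weights(patterns, sample_simplices(n, budget=n * (n - 1) // 2))
print(recall(patterns[0], weights))
```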