Predictive Attractor Models

Authors: Ramy Mounir, Sudeep Sarkar

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments |
| Researcher Affiliation | Academia | "Department of Computer Science and Engineering, University of South Florida, Tampa {ramy, sarkar}@usf.edu" |
| Pseudocode | Yes | "Algorithm 1: Sequence Learning. ... Algorithm 2: Sequence Generation." |
| Open Source Code | Yes | "Illustration videos and code are available on our project page: https://ramymounir.com/publications/pam." |
| Open Datasets | Yes | "Datasets: We perform evaluations on synthetic and real datasets. ... Additionally, we evaluate on real datasets of various types (e.g., protein sequences, text, vision)... ProteinNet [7]... Moving MNIST [60], CLEVRER [68], as well as synthetically generated sequences of CIFAR [33] images." |
| Dataset Splits | No | The paper does not explicitly state training/validation/test dataset splits; it mentions only training and testing. |
| Hardware Specification | No | "PAM operates entirely on CPU. ... We specify the compute resources required for PAM (i.e., CPU) and plot a comparison of the time required by each method at different parameters in Figure 3D." The paper specifies only "CPU", without naming specific hardware models or types. |
| Software Dependencies | No | The paper mentions the Adam optimizer but does not name any software libraries or programming languages with version numbers. |
| Experiment Setup | Yes | "D Implementation Details: For each model, we optimize a single set of hyperparameters for all the experiments. ... All η+ values in Equations 7 & 8 are set to 0.1. η_B is set to 0.1, while η_A is set to 0.0... The threshold for the δ function is set as a function of the SDR sparsity. For the transition function, we use a threshold of 0.8W... For the emission function, we use a threshold of 0.1W... For the tPC architecture, we use a learning rate of 1e-4 for 800 learning iterations." |
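The experiment-setup quote above can be collected into a minimal configuration sketch. This is a hedged illustration only: the function and key names below are hypothetical, and only the numeric values (η = 0.1/0.0, thresholds of 0.8W and 0.1W, tPC learning rate 1e-4 for 800 iterations) come from the paper's quoted Appendix D; the value of W (the SDR sparsity quantity the δ thresholds are expressed in) is not given in the excerpt, so it is left as a parameter.

```python
def pam_hyperparameters(W):
    """Collect the hyperparameters quoted from Appendix D into one dict.

    W stands for the SDR-sparsity quantity that the delta-function
    thresholds are expressed in terms of; its value is not stated in
    the quoted excerpt, so the caller must supply it. Key names are
    illustrative, not the paper's own identifiers.
    """
    return {
        "eta_plus": 0.1,                  # all η+ values in Equations 7 & 8
        "eta_B": 0.1,                     # η_B
        "eta_A": 0.0,                     # η_A
        "transition_threshold": 0.8 * W,  # δ threshold, transition function
        "emission_threshold": 0.1 * W,    # δ threshold, emission function
        "tpc_learning_rate": 1e-4,        # tPC baseline architecture
        "tpc_iterations": 800,            # tPC learning iterations
    }
```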