Sequence Learning Using Equilibrium Propagation

Authors: Malyaban Bal, Abhronil Sengupta

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the experiments reported in this section, we focus on benchmarking with models that are trained using BP such as LSTMs, GRUs, etc. that can be potentially implemented in a neuromorphic setting." and "Table 1: Comparing our models with other models trained using BP on the IMDB & SNLI datasets."
Researcher Affiliation | Academia | "Malyaban Bal, Abhronil Sengupta, School of Electrical Engineering and Computer Science, The Pennsylvania State University, {mjb7906, sengupta}@psu.edu"
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found; the paper describes its procedures in text and mathematical equations. (A hedged sketch of the two-phase procedure appears after this table.)
Open Source Code | Yes | "Our implementation source code is available at https://github.com/NeuroCompLab-psu/EqProp-SeqLearning."
Open Datasets | Yes | "For testing our proposed work on sentiment analysis problems, we chose the IMDB Dataset and for NLI problems, we chose the Stanford Natural Language Inference (SNLI) dataset." with citations [Maas et al., 2011] and [Bowman et al., 2015].
Dataset Splits | No | "IMDB dataset comprises of 50K reviews, 25K for training and 25K for testing." (This specifies only the train/test split; no validation split is given, either explicitly or via citation of a standard split that includes one.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper.
Experiment Setup | Yes | "Table 2: Hyper-parameters & Perf. Metrics for IMDB dataset." and "Table 3: Hyper-parameters & Perf. Metrics for SNLI dataset." Both tables list the optimal influence factor (β), T (free phase), K (nudge phase), epochs, layers (linear & FC), layer-wise learning rates, and batch size. (See the configuration sketch following the EqProp example below.)
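
Since the paper presents its training procedure through equations rather than a labeled algorithm block, the following is a minimal, self-contained sketch of the generic two-phase equilibrium propagation update it builds on: a free phase of T relaxation steps, a nudge phase of K steps with influence factor β, and a contrastive weight update scaled by 1/β. The toy network shape, the simplified state dynamics, and all numeric values are illustrative assumptions; this is not the authors' sequence-learning implementation.

```python
# Minimal EqProp sketch, assuming a toy two-layer Hopfield-style network.
# Sizes, rates, and the simplified dynamics are placeholders; the paper's
# actual model operates on sequences and differs in detail.
import torch

torch.manual_seed(0)
n_in, n_hid, n_out = 8, 16, 2
W1 = torch.randn(n_in, n_hid) * 0.1   # input -> hidden coupling
W2 = torch.randn(n_hid, n_out) * 0.1  # hidden -> output coupling
rho = torch.sigmoid                   # activation inside the energy


def relax(x, h, y, y_target, beta, steps, dt=0.5):
    """Run the state dynamics toward an energy minimum.

    beta = 0 is the free phase; beta > 0 adds the nudging term
    beta * (y_target - y) that pulls the output toward the label.
    """
    for _ in range(steps):
        dh = rho(x) @ W1 + rho(y) @ W2.T - h  # -dE/dh (simplified)
        dy = rho(h) @ W2 - y                  # -dE/dy (simplified)
        if beta > 0:
            dy = dy + beta * (y_target - y)
        h = h + dt * dh
        y = y + dt * dy
    return h, y


def eqprop_step(x, y_target, beta=0.5, T=30, K=10, lr=0.05):
    """One contrastive update: free phase (T steps), nudge phase (K steps
    continued from the free fixed point), then a weight change proportional
    to the difference of the two fixed points, scaled by 1/beta."""
    global W1, W2
    h0, y0 = torch.zeros(n_hid), torch.zeros(n_out)
    h_free, y_free = relax(x, h0, y0, y_target, beta=0.0, steps=T)
    h_nud, y_nud = relax(x, h_free, y_free, y_target, beta=beta, steps=K)
    W1 += (lr / beta) * (torch.outer(rho(x), rho(h_nud))
                         - torch.outer(rho(x), rho(h_free)))
    W2 += (lr / beta) * (torch.outer(rho(h_nud), rho(y_nud))
                         - torch.outer(rho(h_free), rho(y_free)))


x = torch.randn(n_in)                 # a toy input
y_target = torch.tensor([1.0, 0.0])   # a toy one-hot label
eqprop_step(x, y_target)
```

The 1/β scaling in the update reflects EqProp's central result: as β → 0, the contrastive difference between the nudged and free fixed points approaches the gradient that backpropagation would compute.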
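Tables 2 and 3 of the paper give per-dataset values for the hyper-parameters named above. As a purely hypothetical illustration of how such a configuration could drive the step sketched earlier, with placeholder numbers rather than the paper's reported values:

```python
# Hypothetical configuration; every value below is a placeholder, not the
# paper's. It only names the Table 2/3 hyper-parameters in code form.
imdb_config = {
    "beta": 0.5,                # optimal influence factor (β)
    "T": 30,                    # free-phase steps
    "K": 10,                    # nudge-phase steps
    "epochs": 10,
    "batch_size": 64,
    "layer_lrs": [0.05, 0.02],  # layer-wise learning rates (linear & FC)
}

eqprop_step(x, y_target,
            beta=imdb_config["beta"],
            T=imdb_config["T"],
            K=imdb_config["K"],
            lr=imdb_config["layer_lrs"][0])
```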