Initialized Equilibrium Propagation for Backprop-Free Training

Authors: Peter O'Connor, Efstratios Gavves, Max Welling

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that this network appears to work as well as or better than the original version of equilibrium propagation while requiring fewer steps to converge. We verify that our algorithm works on the MNIST dataset. The learning curves can be seen in Figure 2."
Researcher Affiliation | Academia | Peter O'Connor, Efstratios Gavves, Max Welling; QUVA Lab, University of Amsterdam, Amsterdam, Netherlands; peter.ed.oconnor@gmail.com, egavves@uva.nl, m.welling@uva.nl
Pseudocode | Yes | Algorithm 1: Training; Algorithm 2: Feedforward Inference; Algorithm 3: Iterative Inference (a hedged sketch of how these fit together appears after the table)
Open Source Code | Yes | Code available at https://github.com/QUVA-Lab/init-eqprop
Open Datasets | Yes | "We verify that our algorithm works on the MNIST dataset."
Dataset Splits | No | The paper does not explicitly provide train/validation/test splits with percentages or sample counts. It mentions minibatches but no clear split methodology.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory, or computing environment) used to run its experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.0, or scikit-learn 0.24).
Experiment Setup | Yes | "Input: Dataset (x, y), Step Size ε, Learning Rate η, Network Architecture α, Number of negative-phase steps T⁻, Number of positive-phase steps T⁺" and "We use λ = 0.1 as the regularizing parameter from Equation 10" (a hedged configuration sketch follows below)
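
The three algorithms referenced in the pseudocode row correspond to the training loop, the feedforward initialization pass, and the iterative settling of the equilibrating network. Below is a minimal NumPy sketch of how they fit together, assuming the standard equilibrium propagation dynamics of Scellier and Bengio (2017); all function names and constants here are illustrative, not the authors' code (see the linked repository for the official implementation).

```python
# Minimal sketch of initialized equilibrium propagation for one hidden
# layer. Assumes standard EqProp dynamics; hyperparameter values are
# placeholders, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    """Hard-sigmoid activation, standard in equilibrium propagation."""
    return np.clip(s, 0.0, 1.0)

n_in, n_hid, n_out = 784, 500, 10
W1 = rng.normal(0.0, 0.05, (n_in, n_hid))   # equilibrating-network weights
W2 = rng.normal(0.0, 0.05, (n_hid, n_out))
F1, F2 = W1.copy(), W2.copy()               # feedforward-initializer weights

def settle(x, h, y, target, beta, T, eps):
    """Iterative inference: relax (h, y) toward a fixed point of the
    energy; beta > 0 weakly clamps the output toward the target."""
    for _ in range(T):
        dh = -h + rho(x) @ W1 + rho(y) @ W2.T
        dy = -y + rho(h) @ W2 + beta * (target - y)
        h, y = h + eps * dh, y + eps * dy
    return h, y

def train_step(x, target, beta=0.5, eps=0.5, eta=0.01, T_neg=20, T_pos=4):
    """One initialized-EqProp update on a minibatch x with one-hot target."""
    global W1, W2
    # Feedforward inference: the initializer proposes a starting state,
    # replacing the long settling-from-zero of vanilla EqProp.
    h0 = rho(rho(x) @ F1)
    y0 = rho(h0 @ F2)
    # Negative phase: settle to the free fixed point (beta = 0).
    h_neg, y_neg = settle(x, h0, y0, target, 0.0, T_neg, eps)
    # Positive phase: a few more steps with the output nudged to target.
    h_pos, y_pos = settle(x, h_neg, y_neg, target, beta, T_pos, eps)
    # Contrastive update from the difference of the two fixed points.
    W1 += eta / beta * (rho(x).T @ (rho(h_pos) - rho(h_neg))) / len(x)
    W2 += eta / beta * (rho(h_pos).T @ rho(y_pos)
                        - rho(h_neg).T @ rho(y_neg)) / len(x)
    return h_neg, y_neg   # settled states, reused to train F1, F2
```

A training loop would simply call train_step on successive MNIST minibatches.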
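For the experiment setup row, a hypothetical configuration mirroring Algorithm 1's listed inputs, together with a sketch of training the feedforward initializer against the settled states, is shown below. Only λ = 0.1 is taken from the paper; every other value, and the squared-error-plus-L2 form standing in for Equation 10, is an assumption. The sketch reuses rho, F1, and F2 from the code above.

```python
# Hypothetical values mirroring the inputs listed for Algorithm 1; none
# of the concrete numbers below (except lam = 0.1) come from the paper.
config = dict(
    eps=0.5,                      # ε: state-update step size
    eta=0.01,                     # η: learning rate
    architecture=(784, 500, 10),  # α: layer sizes for MNIST
    T_neg=20,                     # T⁻: negative-phase steps
    T_pos=4,                      # T⁺: positive-phase steps
    lam=0.1,                      # λ: regularizing parameter (Equation 10)
)

def initializer_step(x, h_star, y_star, eta_f=0.01, lam=0.1):
    """Train the feedforward initializer toward the settled states with
    layer-local regression, so no backpropagation is needed. The
    squared-error-plus-L2 loss is an assumed stand-in for Equation 10."""
    global F1, F2
    h = rho(rho(x) @ F1)
    y = rho(h @ F2)
    # Local gradients; the derivative of the clipping nonlinearity is
    # dropped for brevity (a simplification, not the paper's rule).
    F1 -= eta_f * (rho(x).T @ (h - h_star) / len(x) + lam * F1)
    F2 -= eta_f * (h.T @ (y - y_star) / len(x) + lam * F2)
```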