(Non-)Convergence Results for Predictive Coding Networks

Authors: Simon Frieder, Thomas Lukasiewicz

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we use dynamical systems theory to formally investigate the convergence of PCNs as they are used in machine learning. Doing so, we put their theory on a firm, rigorous basis, by developing a precise mathematical framework for PCN and show that for sufficiently small weights and initializations, PCNs converge for any input."
Researcher Affiliation | Academia | (1) Department of Computer Science, University of Oxford, UK; (2) Institute of Logic and Computation, TU Wien, Austria.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements or links indicating the release of open-source code for the described methodology.
Open Datasets | No | The paper presents theoretical analysis and mathematical proofs rather than empirical training on a dataset. Although it mentions "a dataset consisting of a single training example" in the context of its training-stage analysis, it does not use a publicly available dataset for empirical evaluation.
Dataset Splits | No | The paper consists of theoretical analysis and mathematical proofs and involves no empirical experiments requiring dataset splits for validation.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper discusses mathematical parameters and conditions for convergence (e.g., "sufficiently small weights and initializations", a step size γ ∈ (0, 1)), but these are part of the theoretical analysis, not a description of an empirical experimental setup with hyperparameters for a runnable system.
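The convergence claim summarized above — PCN inference converges for sufficiently small weights and initializations, with a step size γ ∈ (0, 1) — can be illustrated with a minimal numerical sketch. Everything below (layer sizes, the tanh nonlinearity, the weight scale, the number of iterations) is an illustrative assumption, not the paper's actual setup or code: it runs gradient descent on the standard PCN energy (the sum of squared prediction errors between adjacent layers) with the input layer clamped, and the energy drops toward zero.

```python
import numpy as np

# Minimal PCN inference sketch (illustrative; not the paper's construction).
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]            # input, two hidden layers, output (assumed)
scale = 0.05                    # "sufficiently small weights"
W = [scale * rng.standard_normal((sizes[l + 1], sizes[l])) for l in range(3)]

f = np.tanh                     # nonlinearity (assumed)
df = lambda z: 1.0 - np.tanh(z) ** 2

def energy(x):
    # PCN energy: sum of squared prediction errors between adjacent layers.
    return sum(0.5 * np.sum((x[l + 1] - f(W[l] @ x[l])) ** 2) for l in range(3))

# Clamp the input layer to one training example; free layers start near zero.
x = [rng.standard_normal(sizes[0])] + [0.01 * rng.standard_normal(s) for s in sizes[1:]]
gamma = 0.5                     # step size in (0, 1)

e0 = energy(x)
for _ in range(200):
    grads = []
    for l in range(1, 4):       # gradient of the energy w.r.t. each free layer
        g = x[l] - f(W[l - 1] @ x[l - 1])
        if l < 3:               # top-down term from the error at layer l + 1
            pre = W[l] @ x[l]
            g -= W[l].T @ ((x[l + 1] - f(pre)) * df(pre))
        grads.append(g)
    for l in range(1, 4):       # synchronous update of all free activities
        x[l] = x[l] - gamma * grads[l - 1]
e_final = energy(x)
```

With weights this small the energy's Hessian is close to the identity, so a step size inside (0, 1) contracts the error at every iteration and `e_final` ends up many orders of magnitude below `e0`; larger weight scales or step sizes break this guarantee, which is the regime the paper's non-convergence results address.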