What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning

Authors: Caleb Schultz Kisby, Saúl A. Blanco, Lawrence S. Moss

AAAI 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Theoretical | This paper is a contribution to neural network semantics, a foundational framework for neuro-symbolic AI. The key insight of this theory is that logical operators can be mapped to operators on neural network states. In this paper, we do this for a neural network learning operator: we map a dynamic operator [φ] to iterated Hebbian learning, a simple learning policy that updates a neural network by repeatedly applying Hebb's learning rule until the net reaches a fixed point. Our main result is that we can translate away [φ]-formulas via reduction axioms. This means that completeness for the logic of iterated Hebbian learning follows from completeness of the base logic. These reduction axioms also provide (1) a human-interpretable description of iterated Hebbian learning as a kind of plausibility upgrade, and (2) an approach to building neural networks with guarantees on what they can learn.
Researcher Affiliation | Academia | Caleb Schultz Kisby (1), Saúl A. Blanco (1), Lawrence S. Moss (2); (1) Department of Computer Science, Indiana University; (2) Department of Mathematics, Indiana University; Bloomington, IN 47408, USA
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The proofs of our main theorem and its major supporting lemmas have been verified using the Lean 4 interactive theorem prover (Moura and Ullrich 2021). The code and installation instructions are available at https://github.com/ais-climber/AAAI2024
Open Datasets | No | This is a theoretical paper focused on logical formalisms and proofs, not empirical experiments. Therefore, it does not involve training models on datasets.
Dataset Splits | No | This is a theoretical paper focused on logical formalisms and proofs, not empirical experiments involving dataset splits for validation.
Hardware Specification | No | The paper is theoretical and focuses on logical proofs and formal systems. It does not describe any empirical experiments or the specific hardware used for computations or proof verification.
Software Dependencies | Yes | The proofs of our main theorem and its major supporting lemmas have been verified using the Lean 4 interactive theorem prover (Moura and Ullrich 2021).
Experiment Setup | No | The paper is theoretical and focuses on logical formalisms and proofs. It does not describe an experimental setup with hyperparameters, training configurations, or system-level settings.
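The "iterated Hebbian learning" policy described in the abstract above — repeatedly applying Hebb's rule until the network reaches a fixed point — can be sketched as a small toy in Python. This is purely illustrative and is not the paper's formalization: the function names, the learning rate `eta`, and the weight cap `w_max` (which guarantees that repeated updates terminate) are our assumptions.

```python
import numpy as np

def hebb_step(W, state, eta=0.5, w_max=1.0):
    """One application of Hebb's rule: strengthen the connection
    between every pair of co-active neurons. Weights are capped at
    w_max so that repeated updates can reach a fixed point."""
    coactivation = np.outer(state, state).astype(float)
    np.fill_diagonal(coactivation, 0.0)  # no self-connections
    return np.minimum(W + eta * coactivation, w_max)

def iterated_hebb(W, state, eta=0.5, w_max=1.0, max_iters=1000):
    """Apply Hebb's rule repeatedly until the weights stop changing,
    i.e. until the net reaches a fixed point of the update."""
    for _ in range(max_iters):
        W_next = hebb_step(W, state, eta, w_max)
        if np.allclose(W_next, W):
            return W_next
        W = W_next
    return W
```

For example, starting from zero weights with neurons 0 and 1 co-active, the connections between them saturate at `w_max` after a few steps, and further applications of `hebb_step` leave the weights unchanged — the fixed point that the paper's dynamic operator refers to.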