Turing Completeness of Bounded-Precision Recurrent Neural Networks

Authors: Stephen Chung, Hava Siegelmann

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine's time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and bounded-precision RNNs, revisiting and extending the theoretical foundations of RNNs. (An illustrative sketch of a stack-augmented RNN cell follows the table.)
Researcher Affiliation | Academia | Stephen Chung, Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, minghaychung@umass.edu; Hava Siegelmann, Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, hava@umass.edu
Pseudocode | No | The paper provides mathematical formulations and descriptions of processes, but no structured pseudocode or algorithm blocks are present.
Open Source Code | No | The paper is theoretical and does not mention releasing any source code or provide links to a code repository. The discussion section notes that "Since growing memory modules are not differentiable, we cannot train them directly by the frequently used error backpropagation algorithm."
Open Datasets | No | The paper is theoretical and does not describe empirical experiments involving datasets or training.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments that would require specific hardware.
Software Dependencies | No | The paper is theoretical and does not describe any computational experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup, hyperparameters, or training configurations.
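The architecture the paper analyzes couples a finite, bounded-precision recurrent core with external growing memory modules (e.g., stacks). As a rough illustration of that pattern only, the sketch below implements a toy stack-augmented RNN cell in NumPy. All names and parameters here (StackAugmentedRNN, saturate, the 8-bit fractional precision, the push/pop interface, the weight scales) are hypothetical choices for this example; this is not the authors' 54-neuron Universal Turing Machine construction.

```python
import numpy as np

def saturate(x, precision=8):
    """Bounded-precision saturated-linear activation: clip to [0, 1] and
    round to a fixed number of fractional bits (an illustrative choice)."""
    x = np.clip(x, 0.0, 1.0)
    scale = 2 ** precision
    return np.round(x * scale) / scale

class StackAugmentedRNN:
    """Toy stack-augmented RNN: a small recurrent core with bounded-precision
    hidden states plus an external growing memory module (a stack).
    Purely illustrative of the general architecture class discussed in the paper."""

    def __init__(self, n_hidden=8, n_input=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.W_x = rng.normal(scale=0.5, size=(n_hidden, n_input))
        self.b = np.zeros(n_hidden)
        self.h = np.zeros(n_hidden)  # bounded-precision hidden state
        self.stack = []              # growing memory module (unbounded depth)

    def step(self, x, action):
        """One transition: update the finite-precision core, then apply a
        discrete stack operation ('push', 'pop', or 'noop')."""
        pre = self.W_h @ self.h + self.W_x @ x + self.b
        self.h = saturate(pre)
        if action == "push":
            # push a binarized reading of the first hidden unit onto the stack
            self.stack.append(1 if self.h[0] > 0.5 else 0)
        elif action == "pop" and self.stack:
            # feed the popped symbol back into the first hidden unit
            self.h[0] = float(self.stack.pop())
        return self.h

# Usage: drive the cell with a few input symbols and stack actions.
rnn = StackAugmentedRNN()
for x, a in [((1, 0), "push"), ((0, 1), "push"), ((1, 1), "pop")]:
    h = rnn.step(np.array(x, dtype=float), a)
print("hidden state:", h)
print("stack contents:", rnn.stack)
```

The key point the sketch tries to convey is the separation of concerns in this architecture class: the recurrent core keeps only finitely many bounded-precision values, while unbounded storage lives entirely in the growing memory module, which is also why, as the discussion section quoted above notes, the memory operations are discrete and not directly trainable by backpropagation.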