Universal Approximation of Input-Output Maps by Temporal Convolutional Nets

Authors: Joshua Hanson, Maxim Raginsky

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that TCNs can approximate a large class of input-output maps having approximately finite memory to arbitrary error tolerance. Furthermore, we derive quantitative approximation rates for deep ReLU TCNs in terms of the width and depth of the network and the modulus of continuity of the original input-output map, and apply these results to input-output maps of systems that admit finite-dimensional state-space realizations (i.e., recurrent models). (A minimal illustrative sketch of the TCN architecture follows the table.)
Researcher Affiliation | Academia | Joshua Hanson, University of Illinois, Urbana, IL 61801, jmh4@illinois.edu; Maxim Raginsky, University of Illinois, Urbana, IL 61801, maxim@illinois.edu
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide access to source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not train models on data, so no dataset access information is given.
Dataset Splits | No | The paper is theoretical and reports no train/validation/test splits.
Hardware Specification | No | The paper is theoretical and describes no experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and describes no experiments, so no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not describe experimental setups, hyperparameters, or training configurations.
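The paper releases no code, so nothing below reproduces the authors' construction. As a purely illustrative sketch of the kind of architecture the approximation result concerns, here is a minimal causal dilated convolutional net with ReLU activations in NumPy. The function names, the doubling dilation schedule, and all parameter shapes are our assumptions, chosen only to make the causality and the depth/width structure of a TCN concrete.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def causal_conv1d(x, w, b, dilation=1):
    """Causal dilated 1-D convolution (illustrative, not the paper's code).

    x : (T, C_in) input sequence
    w : (K, C_in, C_out) filter taps
    b : (C_out,) bias
    The input is left zero-padded, so the output at time t depends only on
    x[t - dilation*(K-1)], ..., x[t]: the layer is causal.
    """
    T, _ = x.shape
    K, _, c_out = w.shape
    pad = dilation * (K - 1)
    xp = np.vstack([np.zeros((pad, x.shape[1])), x])
    y = np.zeros((T, c_out))
    for t in range(T):
        window = xp[t : t + pad + 1 : dilation]   # the K most recent taps
        y[t] = np.einsum("kc,kco->o", window, w) + b
    return y

def tcn(x, params):
    """Stack of causal dilated conv layers with ReLU.

    Doubling the dilation per layer (1, 2, 4, ...) makes the receptive
    field, i.e. the memory of the map, grow exponentially with depth.
    """
    h = x
    for i, (w, b) in enumerate(params):
        h = relu(causal_conv1d(h, w, b, dilation=2 ** i))
    return h

# Tiny demo: a random 3-layer TCN applied to a scalar input sequence.
rng = np.random.default_rng(0)
T, K = 32, 2
widths = [1, 8, 8, 1]                 # channel widths per layer (assumed)
params = [
    (rng.standard_normal((K, widths[i], widths[i + 1])) * 0.5,
     np.zeros(widths[i + 1]))
    for i in range(3)
]
u = rng.standard_normal((T, 1))       # input signal u_0, ..., u_{T-1}
y = tcn(u, params)
print(y.shape)                        # (32, 1): one output per time step
```

With kernel size K = 2 and dilations 1, 2, 4, the output at time t depends on the last 8 input samples, which mirrors how a TCN of bounded depth realizes a map with (approximately) finite memory.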