Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Do ReLU Networks Have An Edge When Approximating Compactly-Supported Functions?
Authors: Anastasis Kratsios, Behnoosh Zamanlooy
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our first main result transcribes this structured approximation problem into a universality problem. We do this by constructing a refinement of the usual topology on the space L¹_loc(ℝᵈ, ℝᴰ) of locally-integrable functions, in which compactly-supported functions can only be approximated in L¹-norm by functions with matching discretized support. We establish the universality of ReLU feedforward networks with bilinear pooling layers in this refined topology. Consequently, we find that ReLU feedforward networks with bilinear pooling can approximate compactly-supported functions while implementing their discretized support. We derive a quantitative uniform version of our universal approximation theorem on the dense subclass of compactly-supported Lipschitz functions. (A minimal sketch of the bilinear-pooling architecture appears after the table.) |
| Researcher Affiliation | Academia | Anastasis Kratsios (EMAIL), Department of Mathematics, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4K1, Canada; Behnoosh Zamanlooy (EMAIL), Department of Computing and Software, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4K1, Canada |
| Pseudocode | No | The paper is highly theoretical, focusing on mathematical definitions, theorems, and proofs related to the approximation capabilities of neural networks. It does not include any structured pseudocode or algorithm blocks describing a computational procedure. |
| Open Source Code | No | The paper is theoretical and focuses on mathematical proofs and universal approximation theorems. There are no mentions of any source code, code repositories, or supplementary materials containing code for the methodology described. |
| Open Datasets | No | This paper presents theoretical research on the approximation capabilities of neural networks for compactly-supported functions. It does not conduct experiments using specific datasets, and therefore no information about open or publicly available datasets is provided. |
| Dataset Splits | No | The paper is purely theoretical and does not involve experimental evaluation on datasets. Therefore, there is no mention of dataset splits for training, validation, or testing. |
| Hardware Specification | No | This paper is theoretical in nature, focusing on mathematical proofs and properties of neural networks. It does not describe any computational experiments, and thus no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any practical implementation or experimental setup. Consequently, it does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is a theoretical work on universal approximation theorems and does not describe any experimental setup, hyperparameters, or training configurations for models. |
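
The paper's central object is a ReLU feedforward network augmented with a bilinear pooling layer. Since the paper provides no code, the following is a minimal NumPy sketch of one common reading of that architecture: two ReLU feedforward branches whose outputs are combined by an outer-product (bilinear) pooling step. The two-branch layout, the layer sizes, and the outer-product form of the pooling are illustrative assumptions; the paper's formal definition may differ.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def feedforward(x, weights, biases):
    """Plain ReLU feedforward pass: affine map + ReLU at each hidden layer,
    with a linear output layer (a standard convention, assumed here)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    W, b = weights[-1], biases[-1]
    return W @ x + b

def bilinear_pooling(u, v):
    """One common form of bilinear pooling: the flattened outer product u v^T."""
    return np.outer(u, v).ravel()

rng = np.random.default_rng(0)
d, width, D = 3, 8, 4  # input dim, hidden width, branch output dim (hypothetical sizes)

def init(sizes):
    """Random Gaussian weights and zero biases for a feedforward branch."""
    Ws = [rng.standard_normal((m, n)) * 0.5 for n, m in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(m) for m in sizes[1:]]
    return Ws, bs

W1, b1 = init([d, width, D])
W2, b2 = init([d, width, D])
x = rng.standard_normal(d)

u = feedforward(x, W1, b1)           # first ReLU branch
v = feedforward(x, W2, b2)           # second ReLU branch
features = bilinear_pooling(u, v)    # D*D bilinear features
print(features.shape)                # (16,)
```

The pooling step supplies multiplicative interactions between the two branches' features; on this reading, that multiplicative structure is what the paper leverages so that the networks can match a target's discretized support, which plain ReLU feedforward networks cannot do in the refined topology it constructs.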