Globally injective and bijective neural operators

Authors: Takashi Furuya, Michael Puthawala, Matti Lassas, Maarten V. de Hoop

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work we present results for when the operators learned by these networks are injective and surjective. As a warmup, we combine prior work in both the finite-dimensional ReLU and operator learning settings by giving sharp conditions under which ReLU layers with linear neural operators are injective. We then consider the case when the activation function is pointwise bijective and obtain sufficient conditions for the layer to be injective. We remark that this question, while trivial in the finite-rank setting, is subtler in the infinite-rank setting and is proven using tools from Fredholm theory. Next, we prove that our supplied injective neural operators are universal approximators and that their implementations, with finite-rank neural networks, are still injective. (A minimal illustrative sketch of ReLU-layer injectivity follows the table.)
Researcher Affiliation | Academia | 1) Shimane University, takashi.furuya0101@gmail.com; 2) South Dakota State University, Michael.Puthawala@sdstate.edu; 3) University of Helsinki, matti.lassas@helsinki.fi; 4) Rice University, mdehoop@rice.edu
Pseudocode | No | The paper contains mathematical derivations, theorems, and proofs, but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit code release statement.
Open Datasets | No | As this is a theoretical paper, it does not use or reference any publicly available datasets for training or evaluation.
Dataset Splits | No | As this is a theoretical paper, it does not specify any training/test/validation dataset splits.
Hardware Specification | No | As this is a theoretical paper, it does not describe any specific hardware used for experiments.
Software Dependencies | No | As this is a theoretical paper, it does not list any specific software dependencies with version numbers related to experimental setup.
Experiment Setup | No | As this is a theoretical paper, it does not provide details about an experimental setup, such as hyperparameters or training settings.
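The abstract excerpt in the Research Type row refers to sharp conditions under which ReLU layers with linear neural operators are injective. Since the paper releases no code, the following is a minimal hypothetical sketch, not the authors' method: it only illustrates, in the plain finite-dimensional case, how the choice of weight matrix decides whether a ReLU layer x -> ReLU(Wx + b) is injective. All function and variable names are our own.

```python
# Minimal numerical sketch (assumed example, not from the paper) of ReLU-layer injectivity.
import numpy as np

def relu_layer(W, b, x):
    """Pointwise ReLU applied to an affine map x -> ReLU(Wx + b)."""
    return np.maximum(W @ x + b, 0.0)

# Injective case: W sends x to (x, -x), so after ReLU at least one coordinate
# still equals |x| with the correct sign information, and x can be recovered.
W_inj = np.array([[1.0], [-1.0]])
b_inj = np.zeros(2)

# Non-injective case: a single ReLU collapses every negative input to 0.
W_non = np.array([[1.0]])
b_non = np.zeros(1)

xs = np.linspace(-2, 2, 401).reshape(-1, 1)
out_inj = {tuple(relu_layer(W_inj, b_inj, x)) for x in xs}
out_non = {tuple(relu_layer(W_non, b_non, x)) for x in xs}

print(len(out_inj), len(xs))  # equal: distinct inputs give distinct outputs
print(len(out_non), len(xs))  # smaller: all negative inputs map to the same output
```

The first layer records both x and -x before the nonlinearity, so no information is lost; the second collapses the negative half-line to zero, which is the kind of failure that injectivity conditions of this type must rule out. The paper's actual results concern the infinite-rank operator setting and are not captured by this toy example.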