Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Operator Learning Lessens the Curse of Dimensionality for PDEs

Authors: Ke Chen, Chunmei Wang, Haizhao Yang

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper provides an estimate of the generalization error of learning Lipschitz operators over Banach spaces using DNNs, with applications to various PDE solution operators. The goal is to specify the DNN width, depth, and number of training samples needed to guarantee a given testing error. Our work provides a theoretical explanation of why the CoD is lessened in PDE operator learning. We extend the generalization theory of Liu et al. (2022) from Hilbert spaces to Banach spaces and apply it to several PDE examples.
Researcher Affiliation | Academia | Ke Chen, Department of Mathematics, University of Maryland, College Park; Chunmei Wang, Department of Mathematics, University of Florida; Haizhao Yang, Department of Mathematics, University of Maryland, College Park
Pseudocode | No | The paper presents mathematical frameworks and theorems (e.g., Equations 1-5, Assumptions 1-7, Theorems 1-5) but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing code, nor does it provide a link to a code repository.
Open Datasets | No | The dataset S = {(u_i, v_i) : v_i = Φ(u_i) + ε_i, i = 1, ..., 2n} is generated independently and identically distributed (i.i.d.) from a random measure γ over X. The paper does not provide concrete access information (link, DOI, or repository) for any publicly available or open dataset used in its methodology or experiments. While it mentions the 2D Shepp-Logan phantom (Gach et al., 2008) as an example for an assumption, that phantom is not used as an experimental dataset for which access information is provided within this paper.
Dataset Splits | Yes | The dataset S is divided into S_1^n = {(u_i, v_i) : v_i = Φ(u_i) + ε_i, i = 1, ..., n}, which is used to train the encoders and decoders, and a training dataset S_2^n = {(u_i, v_i) : v_i = Φ(u_i) + ε_i, i = n + 1, ..., 2n}.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware; no hardware specifications are mentioned.
Software Dependencies | No | The paper focuses on theoretical aspects and does not mention any specific software or library dependencies with version numbers.
Experiment Setup | No | As a theoretical paper, it does not describe an experimental setup with hyperparameters, training configurations, or system-level settings.
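The two-way split quoted under Dataset Splits can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (make_dataset, split_dataset), the scalar toy operator, and the Gaussian noise model are hypothetical choices used only to show the structure of S = {(u_i, v_i)}, with the first n pairs reserved for encoder/decoder training and the remaining n pairs for operator training.

```python
import random

def make_dataset(phi, sample_u, noise_std, n, seed=0):
    """Generate S = {(u_i, v_i) : v_i = phi(u_i) + eps_i, i = 1, ..., 2n}.

    phi: the (toy) target operator; sample_u: draws u_i from the input
    measure; eps_i: i.i.d. Gaussian observation noise. All names here are
    illustrative, not from the paper.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(2 * n):
        u = sample_u(rng)
        v = phi(u) + rng.gauss(0.0, noise_std)
        data.append((u, v))
    return data

def split_dataset(data):
    """Split S into S_1^n (first n pairs, encoder/decoder training)
    and S_2^n (last n pairs, operator training)."""
    n = len(data) // 2
    return data[:n], data[n:]

# Toy usage with a scalar stand-in operator phi(u) = u**2.
S = make_dataset(lambda u: u**2, lambda rng: rng.uniform(-1.0, 1.0),
                 noise_std=0.01, n=100)
S1, S2 = split_dataset(S)
```

The split is deterministic by index (first half vs. second half), matching the quoted indexing i = 1, ..., n and i = n + 1, ..., 2n rather than a random shuffle.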