NOMAD: Nonlinear Manifold Decoders for Operator Learning

Authors: Jacob Seidman, Georgios Kissas, Paris Perdikaris, George J. Pappas

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show this method is able to accurately learn low dimensional representations of solution manifolds to partial differential equations while outperforming linear models of larger size. Additionally, we compare to state-of-the-art operator learning methods on a complex fluid dynamics benchmark and achieve competitive performance with a significantly smaller model size and training cost. We begin our presentation in Section 2 by providing a taxonomy of representative works in the literature. In Section 3 we formally define the supervised operator learning problem and discuss existing approximation strategies, with a focus on highlighting open challenges and limitations. In Section 4 we present the main contributions of this work and illustrate their utility through the lens of a pedagogical example. In Section 5 we provide a comprehensive collection of experiments that demonstrate the performance of using NOMAD against competing state-of-the-art methods for operator learning." (An illustrative decoder sketch follows the table.)
Researcher Affiliation | Academia | Jacob H. Seidman (Graduate Program in Applied Mathematics and Computational Science, University of Pennsylvania, seidj@sas.upenn.edu); Georgios Kissas (Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, gkissas@seas.upenn.edu); Paris Perdikaris (Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, pgp@seas.upenn.edu); George J. Pappas (Department of Electrical and Systems Engineering, University of Pennsylvania, pappasg@seas.upenn.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code and data accompanying this manuscript are available at https://github.com/PredictiveIntelligenceLab/NOMAD."
Open Datasets | Yes | "The code and data accompanying this manuscript are available at https://github.com/PredictiveIntelligenceLab/NOMAD."
Dataset Splits | No | The paper mentions 'training data-set' and 'testing data-set' but does not specify validation splits or split proportions in the main text. It states 'More details about architectures, hyperparameters settings, and training details are provided in the Supplemental Materials', implying this information is deferred to the supplement.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper lists the software used (JAX, Kymatio, Matplotlib, Pytorch, NumPy) with citations to their respective papers, but it does not provide version numbers for these dependencies (e.g., 'PyTorch 1.9' rather than just 'Pytorch [38]').
Experiment Setup | No | The main text only states: 'More details about architectures, hyperparameters settings, and training details are provided in the Supplemental Materials.'
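
To give a concrete picture of the claim quoted in the Research Type row, here is a minimal sketch of a NOMAD-style nonlinear manifold decoder written in JAX (the framework cited under Software Dependencies). This is an illustration under assumptions, not the authors' released implementation: the encoder, layer sizes, sensor count, and the names `init_mlp`, `mlp`, and `nomad_forward` are all hypothetical, and the official repository linked above is the reference.

```python
# Minimal sketch (assumed, not the authors' code) of a NOMAD-style model:
# an encoder maps sensor samples of the input function to a latent code
# beta, and a nonlinear decoder maps [beta, y] to the output at query y.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize weights and biases for a fully connected network."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
            for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]

def mlp(params, x):
    """Forward pass with tanh activations on hidden layers."""
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def nomad_forward(enc_params, dec_params, u_sensors, y):
    """Predict the output function at query location y.

    A linear decoder would expand the output as sum_i beta_i * tau_i(y);
    here the latent code and the query are fed jointly through a
    nonlinear map, which is the nonlinear-decoder idea the paper's
    quoted results refer to.
    """
    beta = mlp(enc_params, u_sensors)                 # latent code in R^p
    return mlp(dec_params, jnp.concatenate([beta, y]))

# Toy usage: 64 sensor values, latent dimension p = 16, scalar 1-d queries.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
enc_params = init_mlp(k1, [64, 128, 16])
dec_params = init_mlp(k2, [16 + 1, 128, 128, 1])

u_sensors = jnp.linspace(0.0, 1.0, 64)   # samples of an input function
y = jnp.array([0.3])                     # query coordinate
s_pred = nomad_forward(enc_params, dec_params, u_sensors, y)
```

Replacing the fixed linear expansion with a joint nonlinear map over the latent code and the query coordinate is what allows a small latent dimension to represent curved solution manifolds, which is the trade-off behind the quoted comparison against larger linear models.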