Harnessing the Power of Neural Operators with Automatically Encoded Conservation Laws
Authors: Ning Liu, Yiming Fan, Xianyi Zeng, Milan Klöwer, Lu Zhang, Yue Yu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As demonstrations, we consider a wide variety of scientific applications ranging from constitutive modeling of material deformation and incompressible fluid dynamics to atmospheric simulation. clawNOs significantly outperform the state-of-the-art NOs in learning efficacy, especially in small-data regimes. Our code and data accompanying this paper are available at https://github.com/ningliu-iga/clawNO. We showcase the prediction accuracy and expressivity of clawNOs across a wide range of scientific problems, including elasticity, shallow water equations, and incompressible Navier-Stokes equations. We compare the performance of clawNOs against a number of relevant machine learning techniques. |
| Researcher Affiliation | Collaboration | 1Global Engineering and Materials, Inc., Princeton, NJ 08540, USA 2Department of Mathematics, Lehigh University, Bethlehem, PA 18015, USA 3Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data accompanying this paper are available at https://github.com/ningliu-iga/clawNO. |
| Open Datasets | Yes | Our code and data accompanying this paper are available at https://github.com/ningliu-iga/clawNO. |
| Dataset Splits | Yes | We then split the generated dataset into 1,000, 100 and 100 for training, validation and testing, respectively. |
| Hardware Specification | Yes | All the experiments are carried out on a single NVIDIA A6000 40GB GPU. |
| Software Dependencies | No | The paper mentions software such as 'SpeedyWeather.jl' and a 'pseudo-spectral Crank-Nicolson solver' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We set the latent dimension in FNO and clawFNO to 20, whereas we counterbalance the additional dimensions introduced due to equivariance in GFNO and clawGFNO by reducing the latent dimension to 10 in all the 2D models and 11 in all the 3D models. [...] We set the batch size to 20 for all 2D models and 10 for all 3D models, with the exception of the small and medium data regimes, where we set the batch size to 2 and 1 when the training datasets are of size 10 and 2, respectively. We employ a cosine annealing learning rate scheduler that decays the initial learning rate to 0. All the 2D models are trained for a total of 100 epochs, whereas all the 3D models are trained for 500 epochs with an early stop if the validation loss stops improving for 100 consecutive epochs. |
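The training schedule quoted above (cosine annealing decaying to 0, 500 epochs for 3D models, early stopping with a patience of 100 epochs) can be sketched in plain Python. This is a minimal stdlib sketch of those settings, not the authors' implementation; the initial learning rate of 1e-3 is an assumption, as the paper excerpt does not state it.

```python
import math

def cosine_annealed_lr(lr0: float, epoch: int, total_epochs: int,
                       lr_min: float = 0.0) -> float:
    """Cosine-annealing schedule that decays lr0 to lr_min over total_epochs
    (same formula as PyTorch's CosineAnnealingLR with eta_min=lr_min)."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

class EarlyStopper:
    """Signal a stop when validation loss has not improved for
    `patience` consecutive epochs (the paper uses patience=100)."""
    def __init__(self, patience: int = 100):
        self.patience = patience
        self.best = float("inf")
        self.stale = 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.stale = val_loss, 0
        else:
            self.stale += 1
        return self.stale >= self.patience  # True -> stop training

# Example: 3D-model settings (500 epochs, patience 100), assumed lr0 = 1e-3.
lrs = [cosine_annealed_lr(1e-3, e, 500) for e in range(501)]
stopper = EarlyStopper(patience=100)
```

The schedule starts at `lr0`, passes through half its initial value at the midpoint, and reaches exactly 0 at the final epoch, matching the stated "decays the initial learning rate to 0".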