Geometry-Informed Neural Operator for Large-Scale 3D PDEs
Authors: Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, Animashree Anandkumar
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To empirically validate the performance of our method on large-scale simulation, we generate the industry-standard aerodynamics dataset of 3D vehicle geometries with Reynolds numbers as high as five million. We explore a range of models on two CFD datasets: the large-scale Ahmed-Body dataset, which we generated, and the Shape-Net Car dataset from [16]. We generate the industry-level vehicle aerodynamics simulation based on the Ahmed-body shapes [28]. As shown in Tables 2 and 3 and Figure 2, GINO achieves the best error rate by a large margin compared with previous methods. |
| Researcher Affiliation | Collaboration | Zongyi Li, Nikola Borislavov Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Prakash Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, Anima Anandkumar. ZL is supported by the Nvidia fellowship. NBK is grateful to the NVIDIA Corporation for support through full-time employment. |
| Pseudocode | No | The paper describes the architecture and components, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not explicitly state that source code for their method is released or provide a link to a repository. |
| Open Datasets | Yes | We also consider the Car dataset generated by [16]. |
| Dataset Splits | Yes | We generate 551 shapes in total and divide them into 500 for training and 51 for validation. We take the 611 water-tight shapes out of the 889 instances, and divide the 611 instances into 500 for training and 111 for validation. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | Each simulation takes 7-19 hours on 16 CPU cores and 2 Nvidia V100 GPUs. |
| Software Dependencies | No | The paper mentions OpenFOAM, Open3D, torch-scatter, and PyTorch Geometric, but does not specify their version numbers. (A version-capture sketch follows the table.) |
| Experiment Setup | Yes | We train each model for 100 epochs with the Adam optimizer and a step learning rate scheduler. The implementation details can be found in the Appendix. All models are trained using the Adam optimizer for 100 epochs, with the learning rate halved at the 50th epoch. We consider starting learning rates such as [0.002, 0.001, 0.0005, 0.00025, 0.0001], with the most favorable results attained at the rates 0.00025 and 0.0001. For the GINO model, we consider channel dimensions [32, 48, 64, 80], latent space [32, 48, 64, 80], and radius from 0.025 to 0.055 (with the domain size normalized to [-1, 1]). (A training-setup sketch follows the table.) |
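
The dataset splits above are fixed-count partitions (500/51 for the Ahmed-Body shapes, 500/111 for the watertight Shape-Net cars). The sketch below shows one way such a deterministic split might be produced; the directory name, the `.stl` file pattern, and the seed are illustrative assumptions, not details reported in the paper.

```python
import random
from pathlib import Path

def split_shapes(data_dir: str, n_train: int, seed: int = 0):
    """Deterministically split shape files into train/validation lists.

    The directory, file pattern, and seed are assumptions for illustration;
    the paper only reports the resulting counts (e.g. 500 train / 51 val).
    """
    shapes = sorted(Path(data_dir).glob("*.stl"))  # sort first so the shuffle is reproducible
    rng = random.Random(seed)
    rng.shuffle(shapes)
    return shapes[:n_train], shapes[n_train:]

# Ahmed-Body: 551 shapes -> 500 for training, 51 for validation
train_files, val_files = split_shapes("ahmed_body_shapes/", n_train=500)
```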
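Because no dependency versions are reported, a reproduction would need to record them itself. The snippet below is one hedged way to log the installed versions of the Python packages named in the row above; the package list is an assumption, and OpenFOAM's version would have to be recorded separately (e.g. from the solver log header).

```python
from importlib.metadata import version, PackageNotFoundError

# Package names are assumptions based on the libraries mentioned in the paper;
# the paper itself does not pin any versions.
packages = ["torch", "torch-scatter", "torch-geometric", "open3d"]

for name in packages:
    try:
        print(f"{name}=={version(name)}")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```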
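The training protocol in the Experiment Setup row (Adam, 100 epochs, learning rate halved at epoch 50, best starting rates of 0.00025 and 0.0001) maps directly onto a standard PyTorch optimizer/scheduler configuration. The sketch below shows only that configuration; `model`, `train_loader`, and the loss function are placeholders, and nothing here is taken from the authors' actual implementation.

```python
import torch

# Placeholders: any GINO-like model and CFD dataloader would slot in here.
model = torch.nn.Linear(3, 1)
train_loader = [(torch.randn(8, 3), torch.randn(8, 1)) for _ in range(10)]
loss_fn = torch.nn.MSELoss()

# Reported protocol: Adam for 100 epochs, learning rate halved at epoch 50,
# with 2.5e-4 / 1e-4 as the most favorable starting rates.
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(100):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # halves the learning rate once epoch 50 is reached
```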