A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs
Authors: Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world geometrically complex PDEs showcase the effectiveness of our method compared with state-of-the-art baselines. We empirically demonstrate the effectiveness of our method through three parts of experiments. |
| Researcher Affiliation | Collaboration | 1Dept. of Comp. Sci. and Tech., Institute for AI, THBI Lab, BNRist Center, Tsinghua-Bosch Joint ML Center, Tsinghua University 2Peng Cheng Laboratory; Pazhou Laboratory (Huangpu), Guangzhou, China 3Bosch Center for Artificial Intelligence |
| Pseudocode | No | The paper describes its method using equations and textual descriptions but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplementary material. |
| Open Datasets | Yes | The paper uses "the 2D stationary incompressible Navier-Stokes equations, in the context of simulating the airflow around a real-world airfoil (w1015.dat) from the UIUC airfoil coordinates database (an open airfoil database) [29]." |
| Dataset Splits | No | The paper specifies the number of collocation points for training and testing, but does not explicitly mention a separate validation set or split percentages. |
| Hardware Specification | Yes | The total amount of compute is around 50 GPU hours with NVIDIA V100 GPU. |
| Software Dependencies | No | The paper states that the method is 'implemented in PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 1e-3, and then use the L-BFGS optimizer. The learning rate of Adam is decayed using a cosine annealing schedule [29] (with a warm-up of 1000 iterations). In each experiment, we sample Nf collocation points, Nb boundary points, and Ni initial points. |
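The two-stage optimization the paper reports (Adam with a warmed-up cosine-annealed learning rate, followed by L-BFGS fine-tuning) can be sketched in PyTorch. This is a minimal illustration only: the network, the iteration count `n_adam_iters`, the collocation-point tensor `xf`, and the placeholder `residual_loss` are all assumptions, not the paper's actual architecture or PDE residual.

```python
import math
import torch

# Hypothetical stand-in for the PINN surrogate; the paper's actual
# architecture is not reproduced here.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

n_adam_iters = 5000  # assumed total Adam iterations (not stated above)
warmup_iters = 1000  # "warm-up of 1000 iterations" from the paper

adam = torch.optim.Adam(model.parameters(), lr=1e-3)

def lr_lambda(step):
    # Linear warm-up for the first 1000 iterations, then cosine annealing
    # of the learning-rate multiplier down toward zero.
    if step < warmup_iters:
        return (step + 1) / warmup_iters
    progress = (step - warmup_iters) / max(1, n_adam_iters - warmup_iters)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

sched = torch.optim.lr_scheduler.LambdaLR(adam, lr_lambda)

# Stand-in for the Nf sampled collocation points.
xf = torch.rand(256, 2)

def residual_loss():
    # Placeholder "PDE residual" loss; a real implementation would
    # differentiate the network output w.r.t. xf to form the governing
    # equations and add boundary/initial terms.
    return model(xf).pow(2).mean()

for step in range(100):  # loop shortened for illustration
    adam.zero_grad()
    loss = residual_loss()
    loss.backward()
    adam.step()
    sched.step()

# Second stage: L-BFGS fine-tuning, which requires a closure in PyTorch.
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=50)

def closure():
    lbfgs.zero_grad()
    loss = residual_loss()
    loss.backward()
    return loss

final_loss = lbfgs.step(closure)
```

The warm-up-then-cosine multiplier is folded into a single `LambdaLR` so one scheduler object covers both phases; `LBFGS.step` takes a closure because it re-evaluates the loss multiple times per step.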