M2N: Mesh Movement Networks for PDE Solvers

Authors: Wenbin Song, Mingrui Zhang, Joseph G. Wallwork, Junpeng Gao, Zheng Tian, Fanglei Sun, Matthew Piggott, Junqing Chen, Zuoqiang Shi, Xiang Chen, Jun Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our methods on stationary and time-dependent, linear and non-linear equations, as well as regularly and irregularly shaped domains. Compared to the traditional Monge-Ampère method, our approach can greatly accelerate the mesh adaptation process by three to four orders of magnitude, whilst achieving comparable numerical error reduction.
Researcher Affiliation | Collaboration | Wenbin Song, ShanghaiTech University (songwb@shanghaitech.edu.cn); Mingrui Zhang, Imperial College London (mingrui.zhang18@imperial.ac.uk); Joseph G. Wallwork, Imperial College London (j.wallwork16@imperial.ac.uk); Junpeng Gao, ETH Zürich (jungao@student.ethz.ch); Zheng Tian, ShanghaiTech University (zheng.tian.11@ucl.ac.uk); Fanglei Sun, ShanghaiTech University (sunfl@shanghaitech.edu.cn); Matthew D. Piggott, Imperial College London (m.d.piggott@imperial.ac.uk); Junqing Chen, Tsinghua University (jqchen@tsinghua.edu.cn); Zuoqiang Shi, Tsinghua University (zqshi@tsinghua.edu.cn); Xiang Chen, Noah's Ark Lab, Huawei (xiangchen.ai@outlook.com); Jun Wang, University College London (jun.wang@cs.ucl.ac.uk)
Pseudocode | No | The paper describes the models with text and diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | All our code is implemented in PyTorch and is publicly available at https://github.com/wb-song/m2n.
Open Datasets | No | For both the square and heptagonal domain experiments, we generate analytical u samples from a mixed Gaussian distribution, which are fed into Poisson's equation to obtain the corresponding source terms f and boundary conditions u0 as the problem samples, whereas the u functions serve as the ground truth. ... The models are trained at mesh densities of 13, 16, 19, and 22, each with 320 samples, and tested on mesh densities from 12 to 23, each with 80 samples, to evaluate the performance and generalization capability of the models.
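The data-generation recipe quoted above can be sketched in a few lines: draw random mixture-of-Gaussians parameters for the solution u, then compute the Poisson source term f = -Δu analytically from those parameters. This is a minimal illustration only; the mode count, parameter ranges, and unit-square domain below are assumptions, not the paper's actual sampling choices.

```python
import numpy as np

def gaussian_mixture_sample(rng, n_modes=3):
    """Draw random mixture parameters (a_i, centre_i, sigma_i) for u.
    The ranges here are illustrative assumptions, not the paper's values."""
    return [(rng.uniform(0.5, 1.5),           # amplitude a
             rng.uniform(0.2, 0.8, size=2),   # centre (cx, cy)
             rng.uniform(0.05, 0.15))         # width sigma
            for _ in range(n_modes)]

def u_exact(x, y, params):
    """Ground-truth u(x, y) = sum_i a_i exp(-r_i^2 / (2 sigma_i^2))."""
    u = np.zeros_like(x)
    for a, (cx, cy), s in params:
        u += a * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2))
    return u

def f_source(x, y, params):
    """Source term f = -Laplacian(u) for the problem -Δu = f,
    computed analytically from the same mixture parameters."""
    f = np.zeros_like(x)
    for a, (cx, cy), s in params:
        r2 = (x - cx) ** 2 + (y - cy) ** 2
        g = a * np.exp(-r2 / (2 * s ** 2))
        f -= g * (r2 / s ** 4 - 2.0 / s ** 2)
    return f
```

Because u is a closed-form Gaussian mixture, both the ground truth and the source term come from the same parameter draw, with no numerical PDE solve needed at generation time.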
Dataset Splits | No | The paper explicitly describes training and testing sets but does not mention a distinct validation set being used during training for hyperparameter tuning or early stopping.
Hardware Specification | Yes | All experiments are conducted on an NVIDIA GeForce RTX 3090 GPU, with 24GB memory.
Software Dependencies | No | All our code is implemented in PyTorch and is publicly available at https://github.com/wb-song/m2n. ... The initial mesh T_init is generated with the Delaunay triangulation method provided by Gmsh [Geuzaine and Remacle, 2009].
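The initial-mesh step quoted above uses Gmsh's Delaunay triangulator. As a lightweight stand-in for that dependency, the sketch below Delaunay-triangulates a density × density grid of points with `scipy.spatial.Delaunay`; the unit-square domain and structured point layout are assumptions for illustration, not the paper's Gmsh configuration.

```python
import numpy as np
from scipy.spatial import Delaunay

def initial_mesh(density):
    """Build a Delaunay triangulation of a density x density point grid
    on the unit square. The paper uses Gmsh for this step; scipy's
    Delaunay (qhull) serves here only as a lightweight stand-in."""
    xs = np.linspace(0.0, 1.0, density)
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    tri = Delaunay(pts)
    return pts, tri.simplices  # vertex coordinates, triangle connectivity
```

For example, `initial_mesh(13)` corresponds to the smallest training mesh density reported in the paper's data description.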
Experiment Setup | Yes | We adopt Adam optimizer for training with learning rate 1e-4 and batch size 64. The training is run for 200 epochs.
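The reported setup (Adam, learning rate 1e-4, batch size 64, 200 epochs) can be expressed as a minimal PyTorch training loop. Only the hyperparameters come from the quote; the generic `model`/`dataset` arguments and the MSE loss are toy stand-ins, not the paper's M2N network or its actual training objective.

```python
import torch
from torch import nn

# Hyperparameters quoted in the paper's experiment setup.
LR, BATCH_SIZE, EPOCHS = 1e-4, 64, 200

def train(model, dataset, epochs=EPOCHS):
    """Minimal Adam training loop with the reported settings.

    `model` and `dataset` are placeholders: any nn.Module and any
    (input, target) torch Dataset will do for this sketch.
    """
    loader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=LR)
    loss_fn = nn.MSELoss()  # stand-in loss; the paper's objective differs
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return loss.item()  # loss on the final mini-batch
```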