DiffusionPDE: Generative PDE-Solving under Partial Observation

Authors: Jiahe Huang, Guandao Yang, Zichen Wang, Jeong Joon Park

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to show the versatility of DiffusionPDE as a general PDE-solving framework. We evaluate it on a diverse set of static and temporal PDEs, including the Darcy Flow, Poisson, Helmholtz, Burgers', and Navier-Stokes equations. DiffusionPDE significantly outperforms existing state-of-the-art learning-based methods for solving PDEs [3-6, 8] in both forward and inverse directions with sparse measurements, while achieving comparable results with full observations.
Researcher Affiliation | Academia | Jiahe Huang (University of Michigan), Guandao Yang (Stanford University), Zichen Wang (University of Michigan), Jeong Joon Park (University of Michigan)
Pseudocode | Yes | Algorithm 1, Sparse Observation and PDE Guided Diffusion Sampling Algorithm (reconstructed below; a Python sketch of the same loop follows the table):

1: input: deterministic sampler D_θ(x; σ); noise schedule σ(t_i), i ∈ {0, …, N}; total point count m; observed point count n; observation y; PDE function f; weights ζ_obs, ζ_pde
2: sample x_0 ~ N(0, σ(t_0)² I)  ▷ generate initial sampling noise
3: for i ∈ {0, …, N−1} do
4:   x̂_i^N ← D_θ(x_i; σ(t_i))  ▷ estimate the denoised data at step t_i
5:   d_i ← (x_i − x̂_i^N) / σ(t_i)  ▷ evaluate dx/dσ(t) at step t_i
6:   x_{i+1} ← x_i + (σ(t_{i+1}) − σ(t_i)) d_i  ▷ take an Euler step from σ(t_i) to σ(t_{i+1})
7:   if σ(t_{i+1}) ≠ 0 then  ▷ apply 2nd-order correction unless σ = 0
8:     x̂_i^N ← D_θ(x_{i+1}; σ(t_{i+1}))
9:     d′_i ← (x_{i+1} − x̂_i^N) / σ(t_{i+1})  ▷ evaluate dx/dσ(t) at step t_{i+1}
10:    x_{i+1} ← x_i + (σ(t_{i+1}) − σ(t_i)) · ½ (d_i + d′_i)  ▷ apply the trapezoidal rule at step t_{i+1}
11:  end if
12:  L_obs ← (1/n) ‖y − x̂_i^N‖²₂  ▷ evaluate the observation loss of x̂_i^N
13:  L_pde ← (1/m) ‖0 − f(x̂_i^N)‖²₂  ▷ evaluate the PDE loss of x̂_i^N
14:  x_{i+1} ← x_{i+1} − ζ_obs ∇_{x_i} L_obs − ζ_pde ∇_{x_i} L_pde  ▷ guide the sampling with L_obs and L_pde
15: end for
16: return x_N  ▷ return the denoised data
Open Source Code | Yes | See our project page for results: jhhuangchloe.github.io/Diffusion-PDE/.
Open Datasets | Yes | We utilize Finite Element Methods (FEM) to generate our training data. Specifically, we run FNO's [3] released scripts to generate Darcy Flows and the vorticities of the Navier-Stokes equation. Similarly, we generate the datasets for the Poisson and Helmholtz equations using second-order finite difference schemes (an illustrative finite-difference sketch follows the table). To add more complex boundary conditions, we use DiffTaichi [57] to generate the velocities of the bounded Navier-Stokes equation. We train the joint diffusion model for each PDE on three A40 GPUs for approximately 4 hours, using 50,000 data pairs. For Burgers' equation, we train the diffusion model on a dataset of 50,000 samples produced as outlined in FNO [3].
Dataset Splits | No | The paper does not explicitly mention the use of a validation set or specific training/validation/test splits.
Hardware Specification | Yes | We train the joint diffusion model for each PDE on three A40 GPUs for approximately 4 hours, using 50,000 data pairs. We evaluate the computing cost during the inference stage by testing a single data point on a single A40 GPU for the Navier-Stokes equation.
Software Dependencies | No | The paper mentions using FNO's released scripts [3] and DiffTaichi [57], which are likely Python-based, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We train the joint diffusion model for each PDE on three A40 GPUs for approximately 4 hours, using 50,000 data pairs. We find that DiffusionPDE performs best when the weights ζ are selected as shown in Table 2. During the initial 80% of iterations in the sampling process, guidance is provided exclusively by the observation loss L_obs. After 80% of the iterations have been completed, we introduce the PDE loss L_pde and reduce the weighting factor ζ_obs for the observation loss by a factor of 10. This adjustment shifts the primary guiding influence to the PDE loss, aligning the diffusion model more closely with the dynamics governed by the partial differential equations. (This two-phase schedule is sketched in code after the table.)
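To make Algorithm 1 concrete, below is a minimal PyTorch sketch of the guided sampling loop. It assumes a pretrained EDM-style denoiser D(x, sigma), a differentiable residual function pde_residual, and a binary obs_mask marking the n observed points; all names here are illustrative, not the authors' released API.

import torch

def guided_sample(D, sigmas, y, obs_mask, pde_residual, zeta_obs, zeta_pde):
    """Sketch of Algorithm 1: a Heun (2nd-order) EDM sampler with
    observation and PDE guidance. Argument names are illustrative."""
    x = torch.randn_like(y) * sigmas[0]  # x_0 ~ N(0, sigma(t_0)^2 I)
    n = obs_mask.sum()                   # observed point count
    m = y.numel()                        # total point count
    for i in range(len(sigmas) - 1):
        x = x.detach().requires_grad_(True)
        x_hat = D(x, sigmas[i])                       # denoised estimate at t_i
        d = (x - x_hat) / sigmas[i]                   # dx/dsigma at t_i
        x_next = x + (sigmas[i + 1] - sigmas[i]) * d  # Euler step
        if sigmas[i + 1] > 0:                         # 2nd-order correction unless sigma = 0
            x_hat = D(x_next, sigmas[i + 1])
            d_prime = (x_next - x_hat) / sigmas[i + 1]
            x_next = x + (sigmas[i + 1] - sigmas[i]) * 0.5 * (d + d_prime)
        # Guidance: gradients of both losses w.r.t. x_i, taken through the denoiser
        loss_obs = (obs_mask * (y - x_hat)).pow(2).sum() / n
        loss_pde = pde_residual(x_hat).pow(2).sum() / m
        g_obs = torch.autograd.grad(loss_obs, x, retain_graph=True)[0]
        g_pde = torch.autograd.grad(loss_pde, x)[0]
        x = x_next - zeta_obs * g_obs - zeta_pde * g_pde
    return x.detach()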
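The Poisson/Helmholtz data generation is described only at the level of "second-order finite difference schemes". As an illustration of what such a generator might look like (the Gaussian forcing below is a placeholder, not the distribution used by the paper's or FNO's scripts), here is a 5-point-stencil Poisson solve:

import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import spsolve

def poisson_pair(n=128, seed=0):
    """Illustrative sketch: one (f, u) training pair for -Laplacian(u) = f
    on the unit square with zero Dirichlet boundary, via the standard
    second-order 5-point finite-difference stencil."""
    rng = np.random.default_rng(seed)
    h = 1.0 / (n + 1)
    f = rng.standard_normal((n, n))  # placeholder random forcing
    # 1-D second-difference matrix and its 2-D Kronecker sum (= -Laplacian)
    T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
    A = kron(identity(n), T) + kron(T, identity(n))
    u = spsolve(A.tocsr(), f.ravel()).reshape(n, n)
    return f, u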
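Finally, the two-phase guidance schedule from the experiment setup row, as a small helper (the function name and return convention are ours, inferred from the text):

def guidance_weights(i, num_steps, zeta_obs, zeta_pde):
    """Two-phase schedule: observation-only guidance for the first 80% of
    sampling iterations, then the PDE loss is switched on and zeta_obs
    is reduced by a factor of 10."""
    if i < 0.8 * num_steps:
        return zeta_obs, 0.0            # phase 1: fit the sparse observations
    return zeta_obs / 10.0, zeta_pde    # phase 2: shift emphasis to the PDE residual

In the sampler sketch above, calling zeta_obs, zeta_pde = guidance_weights(i, len(sigmas) - 1, ...) at the top of each iteration would reproduce the schedule described in the paper.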