PDE-Net: Learning PDEs from Data
Authors: Zichao Long, Yiping Lu, Xianzhun Ma, Bin Dong
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment." and "3. Numerical Studies: Convection-Diffusion Equations" and "3.2. Results and Discussions" |
| Researcher Affiliation | Academia | 1 School of Mathematical Sciences, Peking University, Beijing, China; 2 Beijing Computational Science Research Center, Beijing, China; 3 Beijing International Center for Mathematical Research, Peking University, Beijing, China; 4 Center for Data Science, Peking University; 5 Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | PyTorch codes of the PDE-Net are available at https://github.com/ZichaoLong/PDE-Net. |
| Open Datasets | No | "Data is generated by solving problem (8) using a high precision numerical scheme by discretizing Ω using a 50 × 50 grid and a time step size δt = 0.015." and "Details on the data generation and experiments on noise-free case can be found in the supplement (Long et al., 2018)." (A hedged data-generation sketch follows the table.) |
| Dataset Splits | No | "Consider the data set {uj(ti, ·) : i, j = 0, 1, . . .}, where j indicates the j-th solution path with a certain initial condition of the unknown dynamics. We would like to train the PDE-Net with n δt-blocks. For a given n ≥ 1, every pair of the data {uj(ti, ·), uj(ti+n, ·)}, for each i and j, is a training sample, where uj(ti, ·) is the input and uj(ti+n, ·) is the label that we need to match with the output from the PDE-Net." and "we randomly generate 560 initial guesses...and measure the normalized error between the predicted dynamics (i.e. the output of the PDE-Net) and the actual dynamics... among 560 test samples." (A pairing sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., specific CPU or GPU models, memory, or cloud instance types). |
| Software Dependencies | No | PyTorch codes of the PDE-Net are available at https://github.com/ZichaoLong/PDE-Net. (PyTorch is mentioned, but without a specific version number. No other specific software versions are listed.) |
| Experiment Setup | Yes | "The size of the filters that will be used is 5 × 5 or 7 × 7." and "During training, we use LBFGS, instead of SGD, to optimize the parameters. We use 28 data samples per batch to train each layer (i.e. δt-block) and we only construct the PDE-Net up to 20 layers, which requires totally 560 data samples per batch." and "For the filters, we initialize them by freezing them to their corresponding differential operators." (A toy training sketch follows the table.) |
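
The Open Datasets row notes that training data were generated by solving problem (8) with a high-precision scheme on a 50 × 50 grid with time step δt = 0.015. As a loose illustration only, the sketch below rolls out a constant-coefficient convection-diffusion equation with a plain forward-Euler finite-difference scheme; the coefficients, domain, initial condition, and helper names are placeholders and this is not the paper's solver for problem (8).

```python
# Illustrative data generation (NOT the paper's high-precision solver):
# roll out u_t = a*u_x + b*u_y + c*(u_xx + u_yy) on a periodic 50 x 50 grid
# with the quoted time step dt = 0.015. Coefficients and the initial
# condition are placeholders.
import numpy as np

N, dt = 50, 0.015
h = 2 * np.pi / N                       # assume the domain [0, 2*pi)^2
a, b, c = 1.0, 1.0, 0.1                 # placeholder PDE coefficients

def step(u):
    """One forward-Euler step with periodic central differences."""
    ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * h)
    lap = (np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0)
           + np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1) - 4 * u) / h**2
    return u + dt * (a * ux + b * uy + c * lap)

def trajectory(u0, n_steps):
    """Snapshots u(t_0), ..., u(t_{n_steps}) of one solution path."""
    snaps = [u0]
    for _ in range(n_steps):
        snaps.append(step(snaps[-1]))
    return np.stack(snaps)

rng = np.random.default_rng(0)
u0 = rng.standard_normal((N, N))        # placeholder initial condition
data = trajectory(u0, 20)               # shape (21, 50, 50)
```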
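
The Dataset Splits row quotes how training samples are formed: for a chosen number n of δt-blocks, every pair (uj(ti, ·), uj(ti+n, ·)) along a solution path j is one (input, label) sample. Below is a minimal sketch of that pairing, assuming trajectories are stored as arrays of snapshots; the function name is illustrative, not taken from the released repository.

```python
# Hypothetical pairing of snapshots into (input, label) training samples:
# for n dt-blocks, u_j(t_i, .) is the input and u_j(t_{i+n}, .) is the label.
import numpy as np

def make_pairs(trajectories, n):
    """trajectories: list of arrays of shape (T_j, H, W); n: number of dt-blocks."""
    inputs, labels = [], []
    for traj in trajectories:            # loop over solution paths j
        T = traj.shape[0]
        for i in range(T - n):           # loop over start times t_i
            inputs.append(traj[i])       # u_j(t_i, .)
            labels.append(traj[i + n])   # u_j(t_{i+n}, .)
    return np.stack(inputs), np.stack(labels)

# Example with the `data` trajectory from the sketch above:
# X, y = make_pairs([data], n=5)   # X and y each have shape (16, 50, 50)
```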
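
The Experiment Setup row mentions 5 × 5 or 7 × 7 filters, L-BFGS instead of SGD, 28 samples per δt-block with up to 20 layers (560 samples per batch in total), and filters initialized by freezing them to their corresponding differential operators. The toy PyTorch sketch below only mirrors those quoted settings; it is not the authors' released PDE-Net implementation, and the class, argument, and function names are hypothetical.

```python
# Toy delta-t block and layer-wise L-BFGS training step mirroring the quoted
# setup. NOT the released PDE-Net code; names and shapes are illustrative.
import torch
import torch.nn as nn

class DeltaTBlock(nn.Module):
    def __init__(self, kernel_size=5, n_filters=6, dt=0.015, freeze_filters=True):
        super().__init__()
        # One 5x5 (or 7x7) filter per approximated differential operator.
        self.conv = nn.Conv2d(1, n_filters, kernel_size,
                              padding=kernel_size // 2, bias=False)
        # The paper initializes filters at their corresponding differential
        # operators and can freeze them there; this sketch simply freezes
        # whatever initialization is in place to mimic that setting.
        self.conv.weight.requires_grad_(not freeze_filters)
        # Learnable 1x1 combination of the operator responses into du/dt.
        self.coeff = nn.Conv2d(n_filters, 1, kernel_size=1, bias=False)
        self.dt = dt

    def forward(self, u):                 # u: (batch, 1, H, W)
        return u + self.dt * self.coeff(self.conv(u))   # forward-Euler block

def train_block(model, u_in, u_out, n_iter=50):
    """Fit one block on its 28-sample batch with L-BFGS (as quoted), not SGD."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.LBFGS(params, max_iter=n_iter)

    def closure():
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(u_in), u_out)
        loss.backward()
        return loss

    opt.step(closure)
    return model

# Example usage: block = train_block(DeltaTBlock(), X28, y28) with X28, y28
# of shape (28, 1, 50, 50), i.e. the quoted 28 samples per delta-t block.
```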