CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations

Authors: Peter Yichen Chen, Jinxu Xiang, Dong Heon Cho, Yue Chang, G A Pershing, Henrique Teles Maia, Maurizio M Chiaramonte, Kevin Thomas Carlberg, Eitan Grinspun

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79× and 49× better accuracy, and 39× and 132× smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109× and 89× wall-clock speedups over unreduced models on CPUs and GPUs, respectively.
Researcher Affiliation | Collaboration | (1) Columbia University, (2) Meta Reality Labs Research, (3) MIT CSAIL, (4) University of Toronto
Pseudocode | Yes | Algorithm 1: Robust sampling; Algorithm 3: Latent-space dynamics; Algorithm 4: Network inference: gather full-space information; Algorithm 5: Network inversion: optimally project the updated full-space information onto the low-dimensional embedding. (A minimal sketch of the online latent-dynamics loop follows the table.)
Open Source Code | Yes | Videos and codes are available on the project page: https://crom-pde.github.io.
Open Datasets | No | The paper states it uses "training data from voxel grids, meshes, and point clouds" and describes how this data is generated via "full-order PDE solutions" in various appendices (e.g., G.4, H.4). However, it does not provide a specific link, DOI, repository, or formal citation for a publicly available or open dataset that was used as input for training. The data seems to be generated by the authors themselves for the experiments described.
Dataset Splits | No | For each PDE, we delineate a testing set where Dtest ⊂ D with Dtrain ∩ Dtest = ∅. We construct the manifold (Section 3) with data from Dtrain, and then validate the latent space dynamics (Section 4) on Dtest. We generate training data by setting ν to different piecewise constant functions of three regions... In total, 8 PDE temporal sequences (of 100 time steps) with different ν's are generated for training. We sample another 4 ν's for testing purposes. All training and testing data adopt the same initial condition. (A small split sketch follows the table.)
Hardware Specification | Yes | CPU @ 2.30GHz; NVIDIA GeForce RTX 3080; NVIDIA Tesla V100; 64× Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Software Dependencies | No | We implement the entire training pipeline in PyTorch Lightning (Falcon et al., 2019), which facilitates distributed training across multiple GPUs. We implement both the full-order model and the reduced-order approach within the PyTorch framework (Paszke et al., 2019) without any other dependency. (A minimal training-pipeline sketch follows the table.)
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) for stochastic gradient descent. We use the Xavier initialization for ELU layers and the default initialization (ω0 = 30) for SIREN layers. Unless otherwise noted, we train with a base learning rate of lr = 1e-4 and adopt a learning rate decay strategy (10lr → 5lr → 2lr → 1lr → 0.5lr → 0.2lr). For each aforementioned learning rate, we train for 30,000 epochs. We adopt a batch size of 16 (i.e., 16 simulation snapshots) for the encoder and therefore a batch size of 16 × P for the implicit-neural-representation-based manifold parameterization function. (A configuration sketch follows the table.)
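
The Pseudocode row lists CROM's online stage (Algorithms 3-5). Below is a minimal PyTorch sketch of that loop under stated assumptions: a hypothetical decoder g that maps concatenated spatial coordinates and a latent code to the field value, a caller-supplied full-space update full_space_rhs, and plain gradient descent standing in for the Gauss-Newton-style solve the paper describes for network inversion. All names are illustrative, not the released code.

```python
import torch

def decode(g, q, xs):
    # Algorithm 4 (network inference): evaluate the continuous field at the
    # sample points xs under the latent state q (q has shape (1, r)).
    q_rep = q.expand(xs.shape[0], -1)            # broadcast the latent code to every point
    return g(torch.cat([xs, q_rep], dim=-1))     # u(xs; q)

def invert(g, q_init, xs, u_target, iters=50, lr=1e-2):
    # Algorithm 5 (network inversion): project the updated full-space values
    # back onto the latent embedding by minimizing the reconstruction residual.
    # Gradient descent is shown for brevity; the paper's solve is Gauss-Newton-style.
    q = q_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.mean((decode(g, q, xs) - u_target) ** 2)
        loss.backward()
        opt.step()
    return q.detach()

def latent_step(g, q, xs, full_space_rhs, dt):
    # Algorithm 3 (latent-space dynamics): decode, advance the PDE at the
    # sample points, then re-encode via inversion.
    u = decode(g, q, xs)
    u_next = u + dt * full_space_rhs(u, xs)      # e.g. an explicit Euler update
    return invert(g, q, xs, u_next)
```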
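
The Dataset Splits row describes disjoint parameter sets for training and testing. The following self-contained sketch illustrates that split; the sampler and its value range are assumptions for illustration, not the authors' data-generation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nu(rng, n_regions=3, low=0.1, high=1.0):
    # One piecewise-constant parameter field: a value per spatial region.
    # The value range here is an assumption, not taken from the paper.
    return rng.uniform(low, high, size=n_regions)

nus = [sample_nu(rng) for _ in range(12)]   # 12 distinct parameter settings
train_nus, test_nus = nus[:8], nus[8:]      # 8 for Dtrain, 4 for Dtest (disjoint)
# Each nu would then drive one full-order simulation of 100 time steps from the
# shared initial condition; those trajectories form the training/testing snapshots.
```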
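
Relating to the Software Dependencies row, this is a minimal PyTorch Lightning sketch of a training pipeline with multi-GPU support. The tiny ELU decoder, its dimensions, and the Trainer flags are placeholders, not the paper's architecture or exact configuration.

```python
import torch
import pytorch_lightning as pl

class CROMModule(pl.LightningModule):
    def __init__(self, in_dim=3 + 16, out_dim=3):
        super().__init__()
        # Placeholder decoder: coordinates + latent code -> field value.
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 128), torch.nn.ELU(),
            torch.nn.Linear(128, out_dim),
        )

    def training_step(self, batch, batch_idx):
        inputs, targets = batch
        loss = torch.nn.functional.mse_loss(self.net(inputs), targets)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)

# Distributed training across GPUs is delegated to the Trainer, e.g.:
# trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp")
# trainer.fit(CROMModule(), train_dataloaders=loader)
```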
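
The Experiment Setup row maps to roughly the configuration below: Adam at a base learning rate of 1e-4, Xavier initialization for ELU layers, and the staged schedule 10lr → 5lr → 2lr → 1lr → 0.5lr → 0.2lr with 30,000 epochs per stage, expressed here as a LambdaLR multiplier. The MLP is a placeholder and the SIREN branch (with its ω0 = 30 initialization) is omitted.

```python
import torch

def xavier_init(module):
    # Xavier initialization for the ELU layers, as stated in the setup.
    if isinstance(module, torch.nn.Linear):
        torch.nn.init.xavier_uniform_(module.weight)
        torch.nn.init.zeros_(module.bias)

model = torch.nn.Sequential(          # placeholder ELU MLP, not the paper's network
    torch.nn.Linear(19, 128), torch.nn.ELU(),
    torch.nn.Linear(128, 3),
)
model.apply(xavier_init)

base_lr = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)

# One multiplier per stage; each stage runs for 30,000 epochs.
stage_multipliers = [10, 5, 2, 1, 0.5, 0.2]
epochs_per_stage = 30000

def lr_for_epoch(epoch):
    stage = min(epoch // epochs_per_stage, len(stage_multipliers) - 1)
    return base_lr * stage_multipliers[stage]

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: lr_for_epoch(epoch) / base_lr)
# Call scheduler.step() once per epoch inside the training loop.
```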