Implicit Neural Spatial Representations for Time-dependent PDEs

Authors: Honglin Chen, Rundi Wu, Eitan Grinspun, Changxi Zheng, Peter Yichen Chen

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our approach on various PDEs with examples involving large elastic deformations, turbulent fluids, and multi-scale phenomena. While slower to compute than traditional representations, our approach exhibits higher accuracy and lower memory consumption."
Researcher Affiliation | Academia | "1 Department of Computer Science, Columbia University, New York, USA; 2 Department of Computer Science, University of Toronto, Toronto, Canada; 3 Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA."
Pseudocode | Yes | "Algorithm 1: Time integration" (see the sketch after the table).
Open Source Code | Yes | "Videos and codes are available on the project page." Project webpage: http://www.cs.columbia.edu/cg/INSR-PDE/
Open Datasets | No | The paper's experiments solve PDEs against analytical ground truths or high-resolution reference solutions (e.g., 1D advection, 2D Taylor-Green vortex, elasticity problems); it neither uses nor provides access to pre-existing public datasets for training in the machine-learning sense.
Dataset Splits | No | The paper describes a PDE solver rather than a model trained with explicit train/validation/test splits; accuracy is evaluated by comparing the solver's results to ground-truth or reference solutions.
Hardware Specification | Yes | "We implemented our method using the PyTorch library and performed experiments on an NVIDIA GeForce RTX 3090 GPU."
Software Dependencies | No | The paper mentions PyTorch, the Adam optimizer, Bartels, and Taichi, but does not provide version numbers for these software components.
Experiment Setup | Yes | "We set an initial learning rate lr_0 and reduce it by a factor of 0.1 if the loss value does not decrease for iter_p iterations. We stop the optimization process when the learning rate is lower than lr_min or until it reaches a maximum of iter_max iterations. Specific values of these hyper-parameters are described for each example below. We further report all the parameters and experiment setup in Table 6." (A sketch of this stopping criterion follows the table.)
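
For context on the "Algorithm 1: Time integration" row, the sketch below illustrates the general idea of evolving an implicit neural spatial representation through time: at each step, a new network's weights are optimized so that the field it encodes satisfies a time-discretized PDE residual at randomly sampled collocation points. This is a minimal illustration only, assuming a 1D advection equation, an implicit-Euler residual, and Adam as the optimizer; the names FieldNet and advance_one_step and all hyper-parameter values are placeholders, not taken from the paper.

```python
import copy
import torch

class FieldNet(torch.nn.Module):
    """Minimal MLP encoding a scalar field u(x) over the spatial domain."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def advance_one_step(u_prev, dt, c=1.0, n_samples=1024, iters=500):
    """Fit a new network whose field satisfies an implicit-Euler residual of
    the 1D advection equation u_t + c * u_x = 0 at random collocation points."""
    u_next = copy.deepcopy(u_prev)                # warm-start from the previous step
    opt = torch.optim.Adam(u_next.parameters(), lr=1e-4)
    for _ in range(iters):
        x = torch.rand(n_samples, 1, requires_grad=True)   # collocation points in [0, 1]
        u_new = u_next(x)
        du_dx = torch.autograd.grad(u_new.sum(), x, create_graph=True)[0]
        with torch.no_grad():
            u_old = u_prev(x)                      # previous time step, held fixed
        residual = (u_new - u_old) / dt + c * du_dx
        loss = (residual ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return u_next
```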
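
The Experiment Setup row quotes a learning-rate schedule and stopping rule. A minimal PyTorch sketch of that rule is given below, assuming Adam and a plateau-based decay; loss_fn, the default hyper-parameter values, and the function name optimize_weights are placeholders rather than values from the paper (those are reported in its Table 6).

```python
import torch

def optimize_weights(model, loss_fn, lr0=1e-3, iterp=500, lrmin=1e-8, itermax=20000):
    """Decay the learning rate by 0.1 when the loss plateaus for `iterp` iterations,
    and stop once the learning rate drops below `lrmin` or `itermax` is reached."""
    opt = torch.optim.Adam(model.parameters(), lr=lr0)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, mode="min", factor=0.1, patience=iterp)
    for _ in range(itermax):
        loss = loss_fn(model)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step(loss.item())                    # decays lr if no improvement for `iterp` steps
        if opt.param_groups[0]["lr"] < lrmin:      # terminate once lr falls below lrmin
            break
    return model
```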