Learning Neural Constitutive Laws from Motion Observations for Generalizable PDE Dynamics

Authors: Pingchuan Ma, Peter Yichen Chen, Bolei Deng, Joshua B. Tenenbaum, Tao Du, Chuang Gan, Wojciech Matusik

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we first show our experimental setup (Sec. 4.1). We then compare our methods against baselines and oracles in reconstruction (Sec. 4.2) and generalization (Sec. 4.3). We further study our method in two advanced experiments: the multi-physics environments (Sec. 4.5) and a real-world experiment (Sec. 4.6).
Researcher Affiliation | Collaboration | 1 MIT CSAIL, 2 MIT BCS, 3 Center for Brains, Minds and Machines, 4 Tsinghua University, 5 Shanghai Qi Zhi Institute, 6 MIT-IBM Watson AI Lab, 7 UMass Amherst. Correspondence to: Pingchuan Ma <pcma@csail.mit.edu>, Peter Yichen Chen <pyc@csail.mit.edu>.
Pseudocode | Yes | Algorithm 1 (Time-stepping, classic), Algorithm 2 (Time-stepping, neural), Algorithm 3 (MPM Algorithm), Algorithm 4 (Neural Constitutive Laws)
Open Source Code | No | The paper links to a project website (https://sites.google.com/view/nclaw), which states that 'the results are best illustrated in videos' but does not explicitly state that the source code for the methodology is available there. There is no unambiguous statement confirming a code release and no direct link to a code repository.
Open Datasets | No | The paper describes generating its own training datasets for specific materials (JELL-O, SAND, PLASTICINE, WATER) and states, 'We generate one single trajectory for each environment as the training dataset.' However, it does not provide a direct link, DOI, repository name, or formal citation for public access to these generated datasets.
Dataset Splits | No | The paper trains on 'a single trajectory' for each environment and then evaluates on 'out-of-distribution conditions' for generalization. It does not specify explicit dataset splits (e.g., percentages or sample counts) for training, validation, or testing from a single shared dataset.
Hardware Specification | Yes | We train all our experiments on one NVIDIA RTX A6000.
Software Dependencies | No | The paper mentions using 'Warp (Macklin, 2022)' and 'PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for these software dependencies, which would be needed for a reproducible description of the environment.
Experiment Setup | Yes | Our neural constitutive laws contain two neural networks with equal sizes each for elasticity and plasticity, a total of 11,008 parameters. The neural networks use GELU (Hendrycks & Gimpel, 2016) as non-linearity and contain no normalization layers. We use the Adam optimizer (Kingma & Ba, 2014) with learning rates of 1.0 and 0.1 for elasticity and plasticity, respectively. We train the neural constitutive laws for 300 epochs and decay the learning rates of both elasticity and plasticity using a cosine annealing scheduler. We also clip the norm of gradients to a maximum of 0.1. We also utilize a teacher-forcing scheme that restarts from the ground-truth position periodically. We increase the period of teacher forcing from 25 to 200 steps by a cosine annealing scheduler.
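
The hyperparameters quoted in the Experiment Setup row map onto standard PyTorch components. The sketch below is a hypothetical reconstruction for illustration, not the authors' implementation: the MLP widths and depths, the 9-dimensional input/output, and the `rollout_loss` placeholder are assumptions, while the optimizer, learning rates, epoch count, cosine annealing, gradient-norm clipping, and teacher-forcing schedule follow the quoted description.

```python
# Hypothetical sketch of the quoted training setup. The MLP sizes, the
# 9-dimensional input/output, and rollout_loss are illustrative assumptions;
# the optimizer, learning rates, schedulers, clipping, and teacher forcing
# follow the paper's description.
import math
import torch
import torch.nn as nn


def make_mlp(in_dim: int, out_dim: int, width: int = 64, depth: int = 2) -> nn.Sequential:
    """Small MLP with GELU non-linearities and no normalization layers."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.GELU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


# Two equally sized networks, one for elasticity and one for plasticity.
# (Flattened 3x3 deformation gradient in and out is an assumption.)
elasticity = make_mlp(in_dim=9, out_dim=9)
plasticity = make_mlp(in_dim=9, out_dim=9)

# Adam with the reported learning rates: 1.0 for elasticity, 0.1 for plasticity.
opt_e = torch.optim.Adam(elasticity.parameters(), lr=1.0)
opt_p = torch.optim.Adam(plasticity.parameters(), lr=0.1)

EPOCHS = 300
sched_e = torch.optim.lr_scheduler.CosineAnnealingLR(opt_e, T_max=EPOCHS)
sched_p = torch.optim.lr_scheduler.CosineAnnealingLR(opt_p, T_max=EPOCHS)


def teacher_forcing_period(epoch: int, start: int = 25, end: int = 200) -> int:
    """Cosine schedule growing the teacher-forcing period from 25 to 200 steps."""
    t = min(epoch / EPOCHS, 1.0)
    return int(start + (end - start) * 0.5 * (1.0 - math.cos(math.pi * t)))


def rollout_loss(period: int) -> torch.Tensor:
    """Placeholder loss. In the paper this would be the position error of a
    differentiable MPM rollout that restarts from the ground-truth particle
    positions every `period` steps (teacher forcing)."""
    F = torch.randn(128, 9)
    return elasticity(F).pow(2).mean() + plasticity(F).pow(2).mean()


for epoch in range(EPOCHS):
    loss = rollout_loss(teacher_forcing_period(epoch))
    opt_e.zero_grad()
    opt_p.zero_grad()
    loss.backward()
    # Clip gradient norms to a maximum of 0.1, as reported.
    nn.utils.clip_grad_norm_(elasticity.parameters(), max_norm=0.1)
    nn.utils.clip_grad_norm_(plasticity.parameters(), max_norm=0.1)
    opt_e.step()
    opt_p.step()
    sched_e.step()
    sched_p.step()
```

With the guessed width of 64 and two hidden layers, each network has about 5,400 parameters (roughly 10,800 in total), which is close to but does not exactly match the reported 11,008; the numbers are only meant to convey the scale of the models.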