Learning differentiable solvers for systems with hard constraints

Authors: Geoffrey Négiar, Michael W. Mahoney, Aditi Krishnapriyan

Venue: ICLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide empirical validation of our method on three problems representing different types of PDEs. Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective. (See the first sketch below.) |
| Researcher Affiliation | Academia | ¹University of California, Berkeley; ²Lawrence Berkeley National Laboratory; ³International Computer Science Institute |
| Pseudocode | No | The paper describes its method in detail but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the JAX autodiff framework but does not provide a statement about, or a link to, open-source code for the proposed method. |
| Open Datasets | Yes | The training set contains 1000 PDE parameters φ. ... We follow the data generation procedure from Li et al. (2020), which can be found here. ... The β(x) values are generated in the same manner as in Wang et al. (2021)... |
| Dataset Splits | Yes | The training set contains 1000 PDE parameters φ. The model is then evaluated on a separate test set with M = 50 PDE parameters φ that are not seen during training. ... During the training procedure for both hard- and soft-constrained models, we track relative error on a validation set of PDE solutions with different PDE parameters from the training set. |
| Hardware Specification | Yes | We used a single Titan RTX GPU for each run in our experiments. |
| Software Dependencies | No | The paper mentions software such as JAX, Chebfun, and scipy.optimize, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | The training set contains 1000 PDE parameters φ. ... We use a constant forcing function f equal to 1. ... We use N = 600 for the number of basis functions in the PDE-CL. ... In practice, we sample 750 points for the PDE-CL, and sample a separate 250 points for computing the residual in the loss function. To ensure fairness, we sample 1000 points for the soft-constrained method, which are all used to compute the residual in the loss function. (See the second sketch below.) |
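
To make the hard-constraint idea quoted in the Research Type row concrete, here is a minimal JAX sketch in the spirit of the paper's PDE-constrained layer (PDE-CL), written for a toy 1D Poisson problem −u″(x) = f(x). This is not the authors' implementation: the names (`mlp`, `pde_cl`, `u_hat`), the layer sizes, and the ridge-regularized normal-equations solve are illustrative assumptions. The network outputs N basis functions, and a differentiable linear solve picks the combination coefficients so the PDE holds at the sampled constraint points.

```python
# Minimal sketch (not the authors' code) of a PDE-CL-style hard-constraint
# layer for a toy 1D Poisson problem -u''(x) = f(x).
import jax
import jax.numpy as jnp

N_BASIS = 32   # the paper reports N = 600; smaller here for readability

def mlp(params, x):
    """Map a scalar x to N_BASIS basis-function values b_1(x), ..., b_N(x)."""
    h = jnp.tanh(params["W1"] @ jnp.atleast_1d(x) + params["b1"])
    return params["W2"] @ h + params["b2"]                  # shape (N_BASIS,)

def basis_second_deriv(params, x):
    """b_i''(x) for every basis function, via forward-mode autodiff."""
    return jax.jacfwd(jax.jacfwd(lambda s: mlp(params, s)))(x)

def pde_cl(params, xs_constraint, f):
    """Solve A w = f for the coefficients, where A[j, i] = -b_i''(x_j).
    A ridge-regularized normal-equations solve keeps the layer
    differentiable and numerically stable (an illustrative choice)."""
    A = -jax.vmap(lambda x: basis_second_deriv(params, x))(xs_constraint)
    rhs = f(xs_constraint)
    gram = A.T @ A + 1e-8 * jnp.eye(N_BASIS)
    return jnp.linalg.solve(gram, A.T @ rhs)

def u_hat(params, w, x):
    """Predicted solution: a linear combination of the learned basis."""
    return mlp(params, x) @ w
```

Because the coefficients come from a linear solve rather than from gradient descent on a penalty, the PDE constraint is satisfied at the constraint points by construction (up to the solve's accuracy), which is the contrast with a soft, unconstrained objective.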
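Building on the sketch above, the point-sampling scheme quoted in the Experiment Setup row can be expressed as follows. Again this is a hedged reconstruction rather than the authors' code: `hard_constrained_loss` and `soft_constrained_loss` are hypothetical names, and the soft baseline shown here simply treats the coefficients as ordinary trainable parameters, which simplifies the paper's actual soft-constrained model.

```python
# Sketch of the 750/250/1000 point-sampling scheme quoted above; reuses
# `pde_cl` and `basis_second_deriv` from the previous sketch.
import jax
import jax.numpy as jnp

def residual(params, w, xs, f):
    """Pointwise PDE residual -sum_i w_i b_i''(x) - f(x) at the points xs."""
    d2 = jax.vmap(lambda x: basis_second_deriv(params, x))(xs)   # (|xs|, N)
    return -(d2 @ w) - f(xs)

def hard_constrained_loss(params, key, f):
    xs = jax.random.uniform(key, (1000,))
    xs_c, xs_r = xs[:750], xs[750:]    # 750 constraint points, 250 residual points
    w = pde_cl(params, xs_c, f)        # PDE enforced at xs_c by the solve
    return jnp.mean(residual(params, w, xs_r, f) ** 2)

def soft_constrained_loss(params, w, key, f):
    # Baseline: no constraint solve, so all 1000 points feed the residual loss.
    xs = jax.random.uniform(key, (1000,))
    return jnp.mean(residual(params, w, xs, f) ** 2)
```

With the paper's constant forcing f = 1, `f` can be as simple as `jnp.ones_like`; both losses are then differentiable end to end with `jax.grad`.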