Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations
Authors: Ramansh Sharma, Varun Shankar
NeurIPS 2022 | Conference PDF | Archive PDF
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficiency and accuracy of DT-PINNs via a series of experiments. |
| Researcher Affiliation | Academia | Ramansh Sharma, Department of Computer Science and Engineering, SRM Institute of Science and Technology, India (rs7146@srmist.edu.in); Varun Shankar, School of Computing, University of Utah, UT, USA (shankar@cs.utah.edu) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | We release the datasets and codebase we used as part of the supplementary material. |
| Open Datasets | No | No specific link, DOI, repository name, or formal citation for a publicly available dataset was provided. The paper instead describes generating quasi-uniform collocation points and using manufactured solutions for its experiments. (An illustrative sketch of such a setup appears below the table.) |
| Dataset Splits | No | The paper mentions 'training points' (collocation points) and a 'test set', but does not explicitly describe a validation set or the specific percentages/counts for training, validation, and test splits needed for reproduction. |
| Hardware Specification | Yes | All experiments were run for 5000 epochs on an NVIDIA GeForce RTX 2070. |
| Software Dependencies | No | The paper cites PyTorch and CuPy in its references but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | All experiments were run for 5000 epochs on an NVIDIA GeForce RTX 2070. All results are reproducible with the seeds we used in the experiments. We used the L-BFGS optimizer with manually fine-tuned learning rates for both vanilla-PINNs and DT-PINNs. Both DT-PINNs and vanilla-PINNs used a constant NN depth of s = 4 layers with 50 nodes each across all runs. (A hedged sketch of this configuration appears below the table.) |
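
The paper reports generating quasi-uniform collocation points and using manufactured solutions rather than an external dataset. The sketch below shows what such a setup could look like; the unit-square domain, the Halton-sequence sampling, and the particular solution u(x, y) = sin(πx)·sin(πy) are illustrative assumptions, not the authors' exact construction.

```python
# Hypothetical sketch of the data-generation step described in the paper.
# Domain, sampler, and manufactured solution are assumptions for illustration.
import numpy as np
from scipy.stats.qmc import Halton

def collocation_points(n, seed=0):
    """Quasi-uniform points in the unit square via a Halton sequence (assumed sampler)."""
    return Halton(d=2, seed=seed).random(n)

def manufactured_solution(xy):
    """Assumed exact solution u and its Poisson forcing f = -laplacian(u)."""
    x, y = xy[:, 0], xy[:, 1]
    u = np.sin(np.pi * x) * np.sin(np.pi * y)
    f = 2.0 * np.pi**2 * u  # -laplacian(sin(pi x) sin(pi y)) = 2 pi^2 * u
    return u, f

pts = collocation_points(2000)
u_exact, f_rhs = manufactured_solution(pts)
```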
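
Based on the reported configuration (4 hidden layers of 50 nodes, the L-BFGS optimizer, 5000 epochs, fixed seeds), a minimal PyTorch sketch of the training loop might look as follows. The tanh activation, the learning rate, and the placeholder loss are assumptions: the paper states the learning rates were manually fine-tuned but does not quote values, and the true objective is the PINN PDE-residual loss, not the stand-in below.

```python
# Minimal sketch of the reported training setup; loss and lr are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)  # the paper notes results are reproducible with fixed seeds

# MLP with s = 4 hidden layers x 50 nodes (as reported); tanh is an assumption
layers = [2] + [50] * 4 + [1]
mods = []
for fan_in, fan_out in zip(layers[:-1], layers[1:]):
    mods += [nn.Linear(fan_in, fan_out), nn.Tanh()]
net = nn.Sequential(*mods[:-1])  # drop the activation after the output layer

xy = torch.rand(2000, 2)       # stand-in collocation points
target = torch.zeros(2000, 1)  # stand-in residual target

opt = torch.optim.LBFGS(net.parameters(), lr=0.5)  # lr is a hypothetical value

def closure():
    # L-BFGS in PyTorch requires a closure that re-evaluates the loss
    opt.zero_grad()
    loss = ((net(xy) - target) ** 2).mean()  # placeholder for the PINN loss
    loss.backward()
    return loss

for epoch in range(5000):  # 5000 epochs, as reported in the paper
    opt.step(closure)
```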