Machine learning structure preserving brackets for forecasting irreversible processes
Authors: Kookjin Lee, Nathaniel Trask, Panos Stinis
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we assess the performance of the three parameterizations of the ODE dynamics, which apply progressively more stringent priors. We implement the algorithms in Python 3.6.5, NumPy 1.16.2, and PyTorch 1.7.1 [48]. For the time integrator, we use a PyTorch implementation of differentiable ODE solvers, torchdiffeq [3]. All experiments are performed on a MacBook Pro with a 2.9 GHz i9 CPU and 32 GB of memory. |
| Researcher Affiliation | Collaboration | Kookjin Lee, School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281; Nathaniel Trask, Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87123 (natrask@sandia.gov); Panos Stinis, Pacific Northwest National Laboratory, Richland, WA 99354 |
| Pseudocode | Yes | Algorithm 1: Neural ODE training; Algorithm 2: Penalty or GENERIC training |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology; it contains neither an explicit statement of code release nor a repository link. |
| Open Datasets | Yes | Data for all considered benchmark problems can be found in [51]. |
| Dataset Splits | Yes | We then split the sequence into three segments, [0, t_train], (t_train, t_val], and (t_val, t_test], for training, validation, and test such that 0 < t_train < t_val < t_test (an illustrative split is sketched after the table). |
| Hardware Specification | Yes | All experiments are performed on a MacBook Pro with a 2.9 GHz i9 CPU and 32 GB of memory. |
| Software Dependencies | Yes | We implement the algorithms in Python 3.6.5, NumPy 1.16.2, and PyTorch 1.7.1 [48]. |
| Experiment Setup | Yes | For ODESolve, we use the Dormand-Prince method (dopri5) [49] with relative tolerance 10^-5 and absolute tolerance 10^-6. The loss function L measures the discrepancy between the ground-truth states and approximate states via mean absolute errors, and the network weights and biases are updated using Adamax [50] with an initial learning rate of 0.01. ... For black-box neural ODEs, we simply use a stochastic gradient descent (SGD) optimizer to update the network weights and biases using the mini-batches on the observable states, {x^o_ℓ, x^o_{ℓ+1}, ..., x^o_{ℓ+L-1}} (a minimal sketch of this setup follows the table). |
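
The temporal split quoted in the Dataset Splits row can be reproduced with boolean indexing over the observation times. The snippet below is a minimal sketch under assumed cut times t_train, t_val, t_test and placeholder trajectory data; none of these values come from the paper.

```python
# Minimal sketch (assumed values): splitting one trajectory into
# [0, t_train], (t_train, t_val], (t_val, t_test] for train/validation/test.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)         # placeholder observation times
x = np.random.randn(t.size, 4)           # placeholder state sequence

t_train, t_val, t_test = 6.0, 8.0, 10.0  # assumed cut times, 0 < t_train < t_val < t_test
x_train = x[t <= t_train]
x_val   = x[(t > t_train) & (t <= t_val)]
x_test  = x[(t > t_val) & (t <= t_test)]
```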
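
The integrator, loss, and optimizer settings quoted in the Experiment Setup row map onto a standard torchdiffeq training loop. The sketch below is a hypothetical reconstruction, not the authors' code: the two-layer MLP vector field, the state dimension, the time grid, and the data tensors are placeholders, while dopri5 with tolerances 10^-5/10^-6, the mean-absolute-error loss, and Adamax at learning rate 0.01 follow the quoted settings.

```python
# Hedged sketch of the quoted training setup (vector field and data are placeholders).
import torch
from torchdiffeq import odeint  # differentiable ODE solvers

class VectorField(torch.nn.Module):
    """Hypothetical black-box parameterization of dx/dt."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def forward(self, t, x):
        return self.net(x)

dim, L = 4, 20                                       # placeholder state dimension and sequence length
f = VectorField(dim)
opt = torch.optim.Adamax(f.parameters(), lr=0.01)    # Adamax, initial learning rate 0.01

x_obs = torch.randn(L, dim)                          # placeholder observed states x_l, ..., x_{l+L-1}
t_obs = torch.linspace(0.0, 1.0, L)                  # placeholder observation times

for step in range(100):
    opt.zero_grad()
    # Dormand-Prince (dopri5) with relative tolerance 1e-5 and absolute tolerance 1e-6
    x_pred = odeint(f, x_obs[0], t_obs, method='dopri5', rtol=1e-5, atol=1e-6)
    loss = torch.nn.functional.l1_loss(x_pred, x_obs)  # mean absolute error
    loss.backward()
    opt.step()
```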