Physics-aware, probabilistic model order reduction with guaranteed stability
Authors: Sebastian Kaltenbach, Phaedon Stelios Koutsourelakis
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics where probabilistic, long-term predictions of phenomena not contained in the training data are produced. (Abstract), together with the dedicated section heading 4 EXPERIMENTS (Section 4). |
| Researcher Affiliation | Academia | Technical University of Munich, {sebastian.kaltenbach,p.s.koutsourelakis}@tum.de |
| Pseudocode | No | No pseudocode or algorithm blocks are included in the paper. |
| Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper was found. |
| Open Datasets | No | Our data consists of N time-series $\{\mathbf{x}^{(i)}_{0:T}\}_{i=1}^{N}$ over T time-steps generated by a computational physics simulator. (Section 2) and From this simulation every 800th microscopic time step the particle positions were extracted and used as training data for our system. (Appendix D.1). The paper describes data generated by a simulator, not a publicly available dataset with concrete access information. |
| Dataset Splits | No | No explicit details on validation dataset splits or methodology (e.g., k-fold cross-validation, specific percentages for splits) are provided. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for running experiments are explicitly mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names like PyTorch, TensorFlow, or specific solvers with versions) are explicitly listed. |
| Experiment Setup | Yes | Details of the parameterization can be found in Appendix D. (Section 4) and The architecture of the neural networks for the generative mappings described above as well as for the variational posteriors introduced in Section 2.2 can be seen in Figure 11. (Appendix D.1). Figures 11 and 13 show neural network architectures including layers, activation functions (ReLU), and dropout layers (Figure 13 caption); a hedged sketch of such an architecture is given below the table. |
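
To make the Experiment Setup row concrete, the following is a minimal sketch of a feed-forward mapping with ReLU activations and dropout layers, in the spirit of the architectures reported in Figures 11 and 13 of the paper. The layer widths, dropout rate, and the class name `GenerativeMapping` are illustrative assumptions rather than the authors' exact configuration; since the paper lists no software dependencies, the use of PyTorch here is likewise an assumption of convenience.

```python
# Minimal sketch (assumption): a feed-forward mapping with ReLU activations
# and dropout, loosely following the architecture style indicated in
# Figures 11/13 of the paper. Layer widths, dropout rate, and names are
# placeholders, not the authors' published values.
import torch
import torch.nn as nn

class GenerativeMapping(nn.Module):
    def __init__(self, in_dim: int = 2, hidden_dim: int = 50,
                 out_dim: int = 2, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Map latent (coarse-grained) states to the outputs of the
        # generative mapping; the dimensions used here are illustrative.
        return self.net(z)

if __name__ == "__main__":
    model = GenerativeMapping()
    z = torch.randn(8, 2)      # batch of latent states (illustrative shapes)
    print(model(z).shape)      # -> torch.Size([8, 2])
```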