Almost Surely Stable Deep Dynamics
Authors: Nathan Lawrence, Philip Loewen, Michael Forbes, Johan Backstrom, Bhushan Gopaluni
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the utility of each approach through numerical examples. (Section 6: Experiments) |
| Researcher Affiliation | Collaboration | Nathan P. Lawrence, Department of Mathematics, University of British Columbia; Philip D. Loewen, Department of Mathematics, University of British Columbia; Michael G. Forbes, Honeywell Process Solutions; Johan U. Backström, Backstrom Systems Engineering Ltd.; R. Bhushan Gopaluni, Department of Chemical and Biological Engineering, University of British Columbia |
| Pseudocode | No | The paper describes algorithmic procedures in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for our methods is available here: https://github.com/NPLawrence/stochastic_dynamics. |
| Open Datasets | No | The paper describes generating training data from specified systems (e.g., system (16), Eq. (18) with discretization) but does not provide access details or specify a publicly available dataset. |
| Dataset Splits | No | The paper mentions 'training data' and 'initial conditions not seen during training' but does not provide specific details on how the dataset was split into training, validation, and test sets (e.g., percentages or counts). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'fully connected feedforward neural network' and refers to PyTorch in its references, but it does not specify exact version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | For the deterministic case (Example 2), we used a 3-layer neural network with 64 units per layer and ReLU activations for f̂. The Lyapunov function V was a 2-layer ICNN with 32 units per layer and ReLU activations. We used a learning rate of 1e-3 and trained for 1000 epochs. For the LNN case, we trained for 10000 epochs. We used a batch size of 64 and the Adam optimizer (Kingma and Ba, 2014). For the stochastic case, we used a 3-layer MDN with 64 units per layer and ReLU activations. The Lyapunov function V was the same as the deterministic case. We used a learning rate of 1e-4 and trained for 10000 epochs. We used a batch size of 64 and the Adam optimizer. |
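
The configuration in the last row reads directly as a training recipe. Below is a minimal PyTorch sketch of the deterministic case as described there (3-layer dynamics network with 64 ReLU units, 2-layer ICNN Lyapunov function with 32 units, Adam at 1e-3, batch size 64, 1000 epochs). The class names, the state dimension, the placeholder data loader, and the plain one-step prediction loss are illustrative assumptions, not the authors' implementation; the paper's actual stability-enforcing construction (which combines f̂ with V) and the MDN used in the stochastic case live in the linked repository and are not reproduced here.

```python
# Hypothetical sketch of the reported deterministic setup; structure beyond
# the table above (class names, state dimension, loss) is assumed, not taken
# from https://github.com/NPLawrence/stochastic_dynamics.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicsNet(nn.Module):
    """3-layer feedforward model of the nominal dynamics f̂ (64 units, ReLU)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x):
        return self.net(x)


class ICNN(nn.Module):
    """2-layer input-convex network for the Lyapunov function V (32 units, ReLU).

    Convexity in x is enforced by clamping the z-path weights to be non-negative.
    """
    def __init__(self, state_dim, hidden=32):
        super().__init__()
        self.Wx0 = nn.Linear(state_dim, hidden)
        self.Wz1 = nn.Linear(hidden, hidden, bias=False)
        self.Wx1 = nn.Linear(state_dim, hidden)
        self.Wz2 = nn.Linear(hidden, 1, bias=False)
        self.Wx2 = nn.Linear(state_dim, 1)

    def forward(self, x):
        z = F.relu(self.Wx0(x))
        z = F.relu(F.linear(z, self.Wz1.weight.clamp(min=0)) + self.Wx1(x))
        return F.linear(z, self.Wz2.weight.clamp(min=0)) + self.Wx2(x)


# Hyperparameters from the table: lr 1e-3, 1000 epochs, batch size 64, Adam.
state_dim = 2  # assumed; the paper's numerical examples are low-dimensional
f_hat, V = DynamicsNet(state_dim), ICNN(state_dim)
opt = torch.optim.Adam(list(f_hat.parameters()) + list(V.parameters()), lr=1e-3)


def train(loader, epochs=1000):
    """`loader` is a placeholder DataLoader of (x_t, x_{t+1}) pairs from the
    simulated system. Only a plain one-step prediction loss is shown; the
    paper's stability-constrained prediction built from f̂ and V is omitted."""
    for _ in range(epochs):
        for x, x_next in loader:
            loss = F.mse_loss(f_hat(x), x_next)
            opt.zero_grad()
            loss.backward()
            opt.step()
```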