Learning Deep Input-Output Stable Dynamics
Authors: Ryosuke Kojima, Yuji Okamoto
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Also, we apply our method to a toy bistable model and the task of training a benchmark generated from a glucose-insulin simulator. The results show that the nonlinear system with neural networks by our method achieves the input-output stability, unlike naive neural networks. (Abstract) and We conduct two experiments to evaluate our proposed method. |
| Researcher Affiliation | Academia | Ryosuke Kojima Graduate School of Medicine Kyoto University Kyoto, 606-8501 kojima.ryosuke.8e@kyoto-u.ac.jp Yuji Okamoto Graduate School of Medicine Kyoto University Kyoto, 606-8501 okamoto.yuji.2c@kyoto-u.ac.jp |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/clinfo/DeepIOStability. |
| Open Datasets | No | "Our dataset was reproduced by ourselves based on the literature" (Checklist 4d); "We generate 1000 input and output signals for this experiment." (Section 5.2); "Using this simulator, 1000 input and output signals are synthesized for this experiment." (Section 5.3). The paper does not provide concrete access information for a publicly available or open dataset. |
| Dataset Splits | No | "In our experiments, 90% of the dataset is used for training and the remaining 10% is used for testing." (Section 5.1). The paper does not explicitly mention a validation split. |
| Hardware Specification | Yes | For training each method with neural networks, an NVIDIA Tesla T4 GPU was used. Our experiments are totally run on 20 GPUs over about three days. |
| Software Dependencies | No | The paper mentions using Optuna for hyperparameter optimization but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In these methods, the parameters of our loss function are set as λ = 0 and = 0.01. Also, DIOS-fgh+ uses λ = 0.01 and = 0.01 under the same conditions as DIOS-fgh. (Section 5.2) and use our loss function with λ = 0.001 and = 0.001. |
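As an illustration of the split protocol quoted above (90% training, 10% testing, no validation split, 1000 generated signals), a minimal sketch is shown below. This is not the authors' code; the function name, the seed, and the use of integer placeholders for signals are assumptions made purely for demonstration.

```python
# Illustrative 90/10 train/test split, assuming a simple shuffled split
# over 1000 synthesized signals as described in Sections 5.1-5.3.
import random

def train_test_split(signals, train_frac=0.9, seed=0):
    """Shuffle indices and split a list of signals into train/test subsets."""
    rng = random.Random(seed)
    indices = list(range(len(signals)))
    rng.shuffle(indices)
    cut = int(len(signals) * train_frac)
    train = [signals[i] for i in indices[:cut]]
    test = [signals[i] for i in indices[cut:]]
    return train, test

signals = list(range(1000))  # stand-in for 1000 input/output signals
train, test = train_test_split(signals)
print(len(train), len(test))  # 900 100
```

Note that because the paper reports no validation set, any hyperparameter search (the paper mentions Optuna) would presumably have been run against the training portion or the test portion; the paper does not specify which.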