FESSNC: Fast Exponentially Stable and Safe Neural Controller
Authors: Jingdong Zhang, Luan Yang, Qunxi Zhu, Wei Lin
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we demonstrate the superiority of the FESSNC over existing methods using several case studies; we then validate the scalability of FESSNC to high-dimensional systems with an application to synchronizing coupled oscillators. |
| Researcher Affiliation | Academia | 1School of Mathematical Sciences, LMNS, and SCMS, Fudan University, China. 2Research Institute of Intelligent Complex Systems, Fudan University, China. 3State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, China. 4Shanghai Artificial Intelligence Laboratory, China. |
| Pseudocode | Yes | Algorithm 1 FESSNC |
| Open Source Code | Yes | Our code is available at github.com/jingddong-zhang/FESSNC. |
| Open Datasets | No | The paper describes generating training data by sampling from the defined models (e.g., 'sample 500 data from the safe region' for the double pendulum and kinetic bicycle models) rather than using an external, pre-existing, publicly available dataset with a direct link or citation. |
| Dataset Splits | No | The paper mentions sampling training data (e.g., 'sample 500 data') and testing on a number of trajectories (e.g., 'test the learned controller on 10 sample temporal trajectories') but does not specify explicit train/validation/test dataset splits, percentages, or methodology for splitting. |
| Hardware Specification | Yes | The computing device that we use for calculating our examples includes a single i7-10870 CPU with 16GB memory, and we train all the parameters with Adam optimizer. |
| Software Dependencies | No | The paper mentions 'Pytorch' and the 'Adam optimizer' but does not specify their version numbers or other software dependencies with specific versions required for reproducibility (a sketch for recording such versions follows the table). |
| Experiment Setup | Yes | Two case-study setups are quoted: 'We set ε = 1e-3, c = 0.1. We train the parameters with lr = 0.1 for 300 steps.' and 'We set ε = 1e-3, c = 0.5. We train the parameters with lr = 0.05 for 500 steps.' (An illustrative training sketch follows the table.) |
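
Since the paper reports sampling 500 states from the safe region and training with Adam (lr = 0.1, 300 steps, ε = 1e-3, c = 0.1), a minimal sketch of that configuration is given below. The state dimension, safe-region bounds, controller architecture, and loss are placeholder assumptions for illustration only, not the authors' implementation (which is available at github.com/jingddong-zhang/FESSNC).

```python
import torch

# Hypothetical sketch of the reported setup: 500 states sampled from a
# placeholder "safe region", a stand-in neural controller, and Adam with
# the quoted hyperparameters (lr = 0.1, 300 steps, eps = 1e-3, c = 0.1).
# The loss is a placeholder; the actual FESSNC objective (exponential
# stability and safety conditions) is in the authors' repository.

torch.manual_seed(0)

state_dim = 4                                    # e.g. a double-pendulum state
safe_low, safe_high = -1.0, 1.0                  # placeholder safe-region bounds
x_train = safe_low + (safe_high - safe_low) * torch.rand(500, state_dim)

controller = torch.nn.Sequential(
    torch.nn.Linear(state_dim, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

eps, c = 1e-3, 0.1                               # quoted values for one case study
optimizer = torch.optim.Adam(controller.parameters(), lr=0.1)

for step in range(300):
    u = controller(x_train)                      # control inputs on sampled states
    loss = (u ** 2).mean()                       # placeholder objective only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```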
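
Because the software versions are not pinned in the paper, anyone re-running the experiments may want to log them explicitly; one minimal way to do so with standard PyTorch attributes is sketched below.

```python
import sys
import torch

# Record the interpreter and library versions the paper leaves unspecified;
# torch.version.cuda is None for the CPU-only setup reported.
print("python :", sys.version.split()[0])
print("pytorch:", torch.__version__)
print("cuda   :", torch.version.cuda)
```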