Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees

Authors: Ruikun Zhou, Thanin Quartz, Hans De Sterck, Jun Liu

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We illustrate the effectiveness of the approach with a set of numerical experiments." |
| Researcher Affiliation | Academia | Ruikun Zhou, Thanin Quartz, Hans De Sterck, and Jun Liu are all with the Department of Applied Mathematics, University of Waterloo (ruikun.zhou@uwaterloo.ca, tquartz@uwaterloo.ca, hans.desterck@uwaterloo.ca, j.liu@uwaterloo.ca). |
| Pseudocode | Yes | "The algorithmic structure can be found in Fig. 1 and the pseudocode in Algorithm 1." |
| Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Section 5 and Appendix A.3 where the link of the code is provided." |
| Open Datasets | No | "We use 9 million data points sampled on (−1.5, 1.5) × (−1.5, 1.5)..." The dataset is generated by the authors themselves, and the paper provides no concrete access information (link, DOI, or repository) for it. A hedged generation sketch appears below the table. |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., percentages or sample counts) for training, validation, or testing. |
| Hardware Specification | Yes | "All the training of FNN is performed on Google Colab with a 16GB GPU, and VNN training is done on a 3 GHz 6-Core Intel Core i5." |
| Software Dependencies | No | The paper mentions using the Adam optimizer and dReal as the SMT solver but does not give version numbers for these components, which reproducibility requires. |
| Experiment Setup | Yes | "For learning the dynamics, the number of neurons in the hidden layer varies from 100 to 200... For learning the neural Lyapunov function, there are six neurons in the hidden layer for all the experiments... we use the Adam optimizer for both FNN and VNN, and we use dReal as the SMT solver, setting the precision δ for the falsification as 0.01 for all experiments... the learning rate of the training process varies from 0.1 to 1e-5." Training and falsification sketches based on this setup appear below the table. |
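
The "Open Datasets" row notes that the 9-million-point dataset is self-generated and unpublished. The sketch below shows one minimal way such a dataset could be reproduced: uniform sampling on (−1.5, 1.5) × (−1.5, 1.5) with vector-field labels. The damped-pendulum dynamics `f_true` is a hypothetical stand-in, not one of the paper's benchmark systems.

```python
import numpy as np

# Hypothetical stand-in for the unknown dynamics; the paper's actual
# benchmark systems are not reproduced here.
def f_true(x):
    """Damped-pendulum vector field for states x = [theta, theta_dot], shape (N, 2)."""
    theta, theta_dot = x[:, 0], x[:, 1]
    g, l, m, b = 9.81, 0.5, 0.15, 0.1
    return np.stack([theta_dot,
                     (-m * g * l * np.sin(theta) - b * theta_dot) / (m * l**2)],
                    axis=1)

rng = np.random.default_rng(0)
N = 9_000_000                             # 9 million points, as quoted in the table
X = rng.uniform(-1.5, 1.5, size=(N, 2))   # uniform samples on (-1.5, 1.5)^2
Y = f_true(X)                             # vector-field labels for supervised FNN fitting
```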
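The "Experiment Setup" row states that the Lyapunov network (VNN) has six hidden neurons and is trained with Adam at learning rates from 0.1 down to 1e-5. Below is a minimal PyTorch sketch under those settings; the tanh activation and the specific empirical Lyapunov risk are assumptions borrowed from the neural-Lyapunov-control literature the paper builds on, not details quoted from the paper, and `f_hat` is a placeholder for the learned FNN dynamics.

```python
import torch
import torch.nn as nn

class VNN(nn.Module):
    """Lyapunov candidate with one hidden layer of six neurons (per the setup row)."""
    def __init__(self, dim=2, hidden=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def lyapunov_risk(V, f_hat, x):
    """Empirical Lyapunov risk: penalize V(x) <= 0 and a nonnegative Lie derivative."""
    x = x.clone().requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    lie = (grad_v * f_hat(x)).sum(dim=1, keepdim=True)  # dV/dt along the dynamics
    origin = torch.zeros(1, x.shape[1])
    return (torch.relu(-v) + torch.relu(lie)).mean() + V(origin).pow(2).sum()

V = VNN()
f_hat = lambda x: -x                 # placeholder for the learned closed-loop dynamics
opt = torch.optim.Adam(V.parameters(), lr=0.1)  # annealed toward 1e-5 in the paper

for _ in range(10):                  # a few illustrative steps
    batch = torch.rand(256, 2) * 3.0 - 1.5      # samples on (-1.5, 1.5)^2
    loss = lyapunov_risk(V, f_hat, batch)
    opt.zero_grad(); loss.backward(); opt.step()
```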
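The falsification step queries the dReal SMT solver with precision δ = 0.01, per the setup row. The sketch below, assuming dReal's Python bindings (the `dreal` package with `Variable`, `tanh`, and `CheckSatisfiability`), encodes a six-neuron tanh network symbolically and asks for a state in the domain, outside a small ball around the origin, where either Lyapunov condition fails. The weights and the closed-loop dynamics `f` are illustrative placeholders, not trained values.

```python
from dreal import And, CheckSatisfiability, Or, Variable, tanh

# Placeholder weights; in the actual pipeline these are read off the trained VNN.
W1 = [[0.3, -0.2], [0.1, 0.4], [-0.5, 0.2], [0.3, 0.3], [-0.1, -0.4], [0.2, 0.1]]
b1 = [0.0] * 6
W2 = [0.5, -0.3, 0.4, 0.1, -0.2, 0.3]

x = [Variable("x1"), Variable("x2")]
hidden = [tanh(sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(6)]
V = sum(W2[i] * hidden[i] for i in range(6))

# Gradient of V written out via the tanh derivative, so the Lie derivative
# stays a closed-form symbolic expression.
grad_V = [sum(W2[i] * (1 - hidden[i] * hidden[i]) * W1[i][j] for i in range(6))
          for j in range(2)]
f = [-x[0], -x[1]]                      # placeholder closed-loop dynamics
lie = sum(grad_V[j] * f[j] for j in range(2))

domain = And(*(And(xj >= -1.5, xj <= 1.5) for xj in x))
outside_origin = x[0] * x[0] + x[1] * x[1] >= 0.1 ** 2
violation = And(domain, outside_origin, Or(V <= 0, lie >= 0))

result = CheckSatisfiability(violation, 0.01)   # delta = 0.01, as in the paper
if result:
    print("counterexample box:", result)        # feed back into training
else:
    print("no Lyapunov-condition violation found at delta = 0.01")
```

A `None` result certifies (up to the δ-precision of the solver) that no violating state exists, which is what closes the learner-falsifier loop of Algorithm 1; a returned box would be added to the training set as a counterexample.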