Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
# Neural Lyapunov Control
Authors: Ya-Chien Chang, Nima Roohi, Sicun Gao
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experiments on how the new methods obtain high-quality solutions for challenging robot control problems such as path tracking for wheeled vehicles and humanoid robot balancing. We experimented with several challenging nonlinear control problems in robotics, such as drone landing, wheeled vehicle path following, and humanoid robot balancing. |
| Researcher Affiliation | Academia | Ya-Chien Chang UCSD EMAIL Nima Roohi UCSD EMAIL Sicun Gao UCSD EMAIL |
| Pseudocode | Yes | We provide pseudocode of the algorithm in Algorithm 1. |
| Open Source Code | Yes | Ya-Chien Chang, Nima Roohi, and Sicun Gao. Neural Lyapunov control (project website), https://yachienchang.github.io/NeurIPS2019. |
| Open Datasets | No | The paper uses dynamical systems as its problem domain (e.g., inverted pendulum, Caltech ducted fan), rather than traditional datasets. It samples states from the system's state space for learning, but does not provide access to a specific, pre-collected, publicly available dataset. |
| Dataset Splits | No | The paper describes a learning framework that generates samples from the state space and uses a falsifier to find counterexamples, but it does not specify explicit training, validation, or test dataset splits in terms of percentages or counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software like 'SMT solvers such as dReal' and uses 'stochastic gradient descent' and 'tanh activation functions' for neural networks, but it does not provide specific version numbers for any of these software components or libraries. |
| Experiment Setup | Yes | In all the examples, we use a learning rate of 0.01 for the learner, an ε value of 0.25 and δ value of 0.01 for the falsifier, and re-verify the result with smaller ε in Table 1. |
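The rows above describe the paper's learner/falsifier framework (Algorithm 1): a learner fits a Lyapunov candidate on sampled states, a falsifier searches for states violating the Lyapunov conditions, and counterexamples are fed back into the training set. The toy sketch below illustrates that loop under heavy simplifications that are our assumptions, not the paper's method: a 2-D stable linear system stands in for the robot dynamics, the candidate is quadratic rather than a tanh network, a grid search replaces the dReal SMT falsifier, and finite-difference steps replace SGD on network weights. Only the 0.01 learning rate is taken from the paper's stated setup.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])      # stable linear dynamics: x' = A x

def V(p, X):
    """Quadratic Lyapunov candidate V(x) = a*x1^2 + b*x1*x2 + c*x2^2 (batched)."""
    a, b, c = p
    return a * X[:, 0]**2 + b * X[:, 0] * X[:, 1] + c * X[:, 1]**2

def Vdot(p, X):
    """Lie derivative of V along the dynamics, (dV/dx) . f(x) (batched)."""
    a, b, c = p
    grad = np.stack([2*a*X[:, 0] + b*X[:, 1], b*X[:, 0] + 2*c*X[:, 1]], axis=1)
    return np.einsum('ij,ij->i', grad, X @ A.T)

def risk(p, X):
    """Empirical 'Lyapunov risk': hinge penalties when V <= 0 or Vdot >= 0."""
    m = 1e-3 * (X**2).sum(axis=1)              # small margin away from the origin
    return (np.maximum(0.0, -V(p, X) + m).mean()
            + np.maximum(0.0, Vdot(p, X) + m).mean())

def falsify(p, n=21, r=1.0):
    """Grid stand-in for the SMT falsifier: states violating the conditions."""
    g = np.linspace(-r, r, n)
    X = np.array([[u, w] for u in g for w in g])
    X = X[(X**2).sum(axis=1) >= 0.01]          # exclude a ball around the origin
    m = 1e-3 * (X**2).sum(axis=1)
    bad = (V(p, X) <= m) | (Vdot(p, X) >= -m)
    return X[bad]

# Learner/falsifier loop: retrain on counterexamples until none are found.
rng = np.random.default_rng(0)
p = rng.normal(size=3)                         # random initial candidate
X = rng.uniform(-1, 1, size=(50, 2))           # initial state samples
lr, h = 0.01, 1e-4                             # paper's learning rate; FD step
for _ in range(300):
    cex = falsify(p)
    if len(cex) == 0:
        break                                  # candidate passes the falsifier
    X = np.vstack([X, cex[:10]])               # add a few counterexamples
    grad = np.array([(risk(p + h*e, X) - risk(p - h*e, X)) / (2*h)
                     for e in np.eye(3)])
    p -= lr * grad

print("falsifier counterexamples remaining:", len(falsify(p)))
```

For this linear system a valid quadratic certificate exists (e.g. `p = [1.5, 1.0, 1.0]` solves the Lyapunov equation with Q = I), and the risk is convex in `p`, so the hinge loss can reach zero; the paper's actual method handles nonlinear dynamics with a neural candidate and a sound SMT-based falsifier.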