Iterative Linearized Control: Stable Algorithms and Complexity Guarantees
Authors: Vincent Roulet, Siddhartha Srinivasa, Dmitriy Drusvyatskiy, Zaid Harchaoui
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "5. Experiments: We illustrate the performance of the algorithms considered, including the proposed accelerated regularized Gauss-Newton algorithm, on two classical problems drawn from Li & Todorov (2004): swinging up a pendulum and moving a two-link robot arm." |
| Researcher Affiliation | Academia | 1Department of Statistics, University of Washington, Seattle, USA 2Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, USA 3Department of Mathematics, University of Washington, Seattle, USA. |
| Pseudocode | Yes | Algorithm 1 Accelerated Regularized Gauss-Newton |
| Open Source Code | Yes | The code for this project is available at https://github.com/vroulet/ilqc. |
| Open Datasets | No | The paper refers to the 'inverted pendulum' and 'two-link arm' as classical problems drawn from Li & Todorov (2004), but it does not provide concrete access information (link, DOI, repository, or a specific citation for a dataset file) for any publicly available or open dataset used in the experiments. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions PyTorch and TensorFlow but does not specify their version numbers or any other key software components with specific versions needed for replication. |
| Experiment Setup | Yes | "For ILQR, we use an Armijo line-search to compute the next step. For both the regularized ILQR and the accelerated regularized ILQR, we use a constant step-size sequence tuned after a burn-in phase of 5 iterations." |
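The experiment setup mentions an Armijo line-search for choosing the ILQR step size. For readers unfamiliar with the rule, a minimal backtracking sketch is shown below; this is an illustrative implementation of the generic Armijo condition, not the authors' code from the `ilqc` repository, and the function and parameter names are chosen for this example.

```python
import numpy as np

def armijo_line_search(f, x, direction, grad,
                       alpha0=1.0, c=1e-4, shrink=0.5, max_iter=50):
    """Backtracking (Armijo) line search.

    Shrink the step alpha until the sufficient-decrease condition
        f(x + alpha * d) <= f(x) + c * alpha * <grad, d>
    holds, where d is a descent direction.
    """
    fx = f(x)
    slope = c * np.dot(grad, direction)  # should be negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * direction) <= fx + alpha * slope:
            return alpha
        alpha *= shrink  # halve the step and retry
    return alpha

# Example: quadratic objective with a steepest-descent direction
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([2.0, -1.0])
grad = x.copy()            # gradient of 0.5||x||^2 is x
step = armijo_line_search(f, x, -grad, grad)
```

In ILQR, `f` would be the total trajectory cost and `direction` the update produced by the backward Riccati pass; the line search simply guards against steps that increase the cost.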