Global Convergence of Online Optimization for Nonlinear Model Predictive Control

Authors: Sen Na

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the global convergence behavior of the proposed RTI scheme in a numerical experiment.
Researcher Affiliation | Academia | Department of Statistics, University of Chicago, Chicago, IL 60637; senna@uchicago.edu
Pseudocode | Yes | Algorithm 1: An Adaptive RTI-based MPC Scheme
Open Source Code | Yes | The code is implemented in Julia 1.5.4 and is publicly available (with high-resolution figures) at https://github.com/senna1128/Global-RTI-MPC.
Open Datasets | No | The paper does not use a traditional dataset; it instead simulates a "1D trigonometric perturbed LQR problem" with randomly generated initial iterates. No external, publicly available dataset is mentioned with a link or citation.
Dataset Splits | No | The paper does not specify training, validation, or test splits, as it runs simulations with randomly generated initial iterates rather than using a fixed dataset.
Hardware Specification | No | The paper does not provide any specific hardware details used for running the experiments.
Software Dependencies | Yes | The code is implemented in Julia 1.5.4 and is publicly available (with high-resolution figures) at https://github.com/senna1128/Global-RTI-MPC.
Experiment Setup | Yes | Table 1: Simulation Setups. ... For each case, we perform 1000 independent runs with randomly generated initial iterates (z_0^0, λ_0^0), by letting (x_{k,0}^0, u_{k,0}^0, λ_{k,0}^0) ~ N(0, 25I), ∀k. We stop the iteration if either t > N·M (i.e. it attains the iteration threshold) or ‖∇L^{t,0}‖ ≤ ϵ = 10^{-8} (i.e. it attains the error threshold). We let B_t = µI, ∀t.
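The experimental protocol above (Gaussian-initialized iterates, a KKT-residual stopping tolerance, and an iteration cap) can be sketched as follows. This is a minimal illustrative sketch in Python, not the paper's Julia implementation: `rti_step` is a hypothetical placeholder contraction standing in for Algorithm 1's real-time-iteration update, and the dimension, cap, and residual definition are assumptions for illustration only.

```python
import math
import random

def rti_step(z, lam):
    # Hypothetical placeholder: the paper's Algorithm 1 instead solves a QP
    # linearized at the current primal-dual iterate (z, lam).
    return [0.5 * v for v in z], [0.5 * v for v in lam]

def run_once(dim=4, eps=1e-8, max_iter=10_000, seed=None):
    rng = random.Random(seed)
    # Initial iterate drawn from N(0, 25 I): i.i.d. normals with std dev 5.
    z = [rng.gauss(0.0, 5.0) for _ in range(dim)]
    lam = [rng.gauss(0.0, 5.0) for _ in range(dim)]
    residual = math.inf
    for t in range(max_iter):
        # Stand-in for the KKT residual norm ||∇L^{t,0}|| at iteration t.
        residual = math.sqrt(sum(v * v for v in z + lam))
        if residual <= eps:          # error threshold attained
            return t, residual
        z, lam = rti_step(z, lam)
    return max_iter, residual        # iteration threshold attained

# One of the 1000 independent runs; the paper repeats this with fresh
# random initializations and aggregates the convergence behavior.
iters, res = run_once(seed=0)
print(iters, res)
```

Because the placeholder update is a strict contraction, each run converges well before the iteration cap; the paper's experiments instead measure whether the actual RTI iterates reach the 10^{-8} tolerance.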