Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression

Authors: Behzad Azmi, Dante Kalise, Karl Kunisch

JMLR 2021

Reproducibility Variable Result LLM Response
Research Type Experimental An extended set of low- and high-dimensional numerical tests in nonlinear optimal control reveals that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.
Researcher Affiliation Academia Behzad Azmi (EMAIL), Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenbergerstraße 69, A-4040 Linz, Austria; Dante Kalise (EMAIL), School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2QL, United Kingdom; Karl Kunisch (EMAIL), Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, and Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstraße 36, A-8010 Graz, Austria
Pseudocode Yes Algorithm 1 Barzilai-Borwein two-point step-size gradient method
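The pseudocode cited above is a Barzilai-Borwein (BB) two-point step-size gradient method. A minimal sketch of the generic BB iteration follows; this is not the paper's exact Algorithm 1, and the initial step size, iteration cap, and test problem are illustrative assumptions:

```python
import numpy as np

def bb_gradient_descent(grad, x0, tol=1e-5, max_iter=500):
    """Generic Barzilai-Borwein two-point step-size gradient method (sketch).

    The step size alpha_k = (s^T s) / (s^T y) is computed from the last two
    iterates: s = x_{k} - x_{k-1}, y = grad(x_k) - grad(x_{k-1}).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3  # initial step size (assumption; the paper may choose differently)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:  # stop when the gradient norm is below tol
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        if abs(denom) > 1e-12:  # guard against division by zero
            alpha = (s @ s) / denom
        x, g = x_new, g_new
    return x

# Illustrative use: minimize f(x) = 0.5 * x^T A x with A = diag(1, 10),
# whose gradient is A @ x and whose minimizer is the origin.
A = np.diag([1.0, 10.0])
xmin = bb_gradient_descent(lambda x: A @ x, np.array([1.0, 1.0]))
```

On strictly convex quadratics like this one, the BB step recovers a quasi-Newton-like scaling from only two gradient evaluations per iteration, which is why it is attractive for the reduced optimal control problems described in the paper.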
Open Source Code No The paper states: "Both sampling and regression algorithms were implemented in MATLAB R2014b, and the numerical tests were run in a Mac Book Pro with 2.9 GHz Dual-Core Intel Core i5 and memory 16 GB 1867 MHz DDR3." This describes their implementation environment but does not explicitly state that the code for their methodology is open-source or provide a link to a repository.
Open Datasets No Generating the samples. For each test we fixed an n-dimensional hyperrectangle as the domain for sampling initial condition vectors {x_j}_{j=1}^N ⊂ R^n. These initial vectors were generated using Halton quasi-random sequences in dimension n.
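Halton sequences are deterministic low-discrepancy point sets built from radical-inverse functions in pairwise coprime bases. A self-contained sketch of sampling a hyperrectangle this way follows; the prime bases and domain bounds are illustrative, not values from the paper:

```python
def radical_inverse(i, base):
    """Reverse the base-`base` digits of integer i across the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n_samples, primes, lower, upper):
    """Map Halton points in [0, 1]^n to the hyperrectangle [lower, upper].

    `primes` holds one coprime base per dimension (typically the first n primes).
    """
    pts = []
    for i in range(1, n_samples + 1):  # index 0 is skipped (it maps to the origin)
        u = [radical_inverse(i, p) for p in primes]
        pts.append([lo + (hi - lo) * ui for ui, lo, hi in zip(u, lower, upper)])
    return pts

# Illustrative use: 3 quasi-random points in the unit square.
pts = halton(3, primes=[2, 3], lower=[0.0, 0.0], upper=[1.0, 1.0])
```

Unlike pseudo-random sampling, rerunning this generator reproduces the same point set exactly, which is one reason quasi-random sequences are a reasonable choice for reproducible experiments.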
Dataset Splits Yes We split the sampling dataset {x_j, V^j, V_x^j}_{j=1}^N into two sets: a set of training indices I_tr which is used for regression, and a set of validation indices I_val, with I_val ∪ I_tr = {1, . . . , N}. Without loss of generality, we assume that I_tr = {1, . . . , N_d} and I_val = {N_d + 1, . . . , N} for N ∈ N with N_d < N.
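The quoted split can be rendered directly in code. The values of N and N_d below are placeholders; the paper's actual sample counts vary per test:

```python
def split_indices(N, N_d):
    """Contiguous train/validation index split, as described in the quote:
    I_tr = {1, ..., N_d} and I_val = {N_d + 1, ..., N}."""
    assert 0 < N_d < N
    I_tr = list(range(1, N_d + 1))
    I_val = list(range(N_d + 1, N + 1))
    return I_tr, I_val

# Illustrative use: 10 samples, 7 used for training.
I_tr, I_val = split_indices(10, 7)
```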
Hardware Specification Yes Both sampling and regression algorithms were implemented in MATLAB R2014b, and the numerical tests were run in a MacBook Pro with 2.9 GHz Dual-Core Intel Core i5 and memory 16 GB 1867 MHz DDR3.
Software Dependencies Yes Both sampling and regression algorithms were implemented in MATLAB R2014b, and the numerical tests were run in a MacBook Pro with 2.9 GHz Dual-Core Intel Core i5 and memory 16 GB 1867 MHz DDR3.
Experiment Setup Yes Every optimal control problem was solved in the reduced form by using Algorithm 1 with tol = 10^-5 as discussed in Section 2.1. For problems (P_ℓ1) and (AP_ℓ1) we chose the sparse penalty parameter λ = 0.002 and λ ∈ {0.01, 0.02}, respectively. The linear least squares problems (P_ℓ2) and (AP_ℓ2) were solved using a preconditioned conjugate gradient method, and the algorithm was terminated when the norm of the residual was less than 10^-8. For the LASSO regressions (P_ℓ1) and (AP_ℓ1), we employed Algorithm 2 with tol = 10^-5.
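The paper's Algorithm 2 is not reproduced here. As a generic stand-in, a proximal-gradient (ISTA) iteration for the LASSO problem min_x 0.5‖Ax − b‖² + λ‖x‖₁ with a tolerance-based stopping rule looks like the following; the step-size rule, iteration cap, and test data are assumptions, not the paper's settings:

```python
import numpy as np

def ista_lasso(A, b, lam, tol=1e-5, max_iter=5000):
    """Generic ISTA solver for the LASSO (sketch; not the paper's Algorithm 2).

    Each step takes a gradient step on 0.5 * ||Ax - b||^2 and then applies
    soft-thresholding, the proximal operator of lam * ||x||_1.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        z = x - A.T @ (A @ x - b) / L             # gradient step
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        if np.linalg.norm(x_new - x) < tol:       # stop when iterates stagnate
            return x_new
        x = x_new
    return x

# Illustrative use with A = I, where the solution is soft-thresholding of b:
x = ista_lasso(np.eye(3), np.array([3.0, 0.001, -2.0]), lam=0.01)
```

The soft-thresholding step is what drives small coefficients exactly to zero, which is the mechanism behind the sparse, low-complexity feedback laws reported by the authors.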