Smoothed Online Learning for Prediction in Piecewise Affine Systems
Authors: Adam Block, Max Simchowitz, Russ Tedrake
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper builds on the recently developed smoothed online learning framework and provides the first algorithms for prediction and simulation in PWA systems whose regret is polynomial in all relevant problem parameters under a weak smoothness assumption; moreover, our algorithms are efficient in the number of calls to an optimization oracle. |
| Researcher Affiliation | Academia | Adam Block, Department of Mathematics, MIT (ablock@mit.edu); Max Simchowitz, MIT (msimchow@csail.mit.edu); Russ Tedrake, MIT |
| Pseudocode | Yes | The paper includes pseudocode, e.g., Algorithm 1 ("Main Algorithm"). |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper focuses on theoretical analysis and algorithms for PWA systems, using abstract data notation (e.g., covariates x_t, responses y_t, data (x_{1:s}, y_{1:s})), and does not specify or provide access information for any publicly available dataset. |
| Dataset Splits | No | The paper presents theoretical algorithms and regret bounds, but it does not describe an experimental setup that would involve specific training, validation, and test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe an experimental setup requiring specific hardware specifications. |
| Software Dependencies | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees, and thus does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper outlines algorithms and provides theoretical guarantees (e.g., regret bounds) rather than empirical experiment details, and therefore does not specify concrete hyperparameter values or training configurations. |
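
Since the paper is purely theoretical, the table above contains no experimental details. For readers who want a concrete picture of the setting it studies, below is a minimal, hypothetical sketch of one-step prediction in a piecewise affine (PWA) system: a toy 1-D system with two affine modes and a naive per-mode least-squares predictor. This is not the paper's Algorithm 1; the mode split, parameters, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D PWA system with two affine modes split at x = 0:
# x_{t+1} = a_i * x_t + b_i + noise, where the active mode i is
# determined by which region x_t falls in.
modes = {
    "neg": (0.9, -0.2),   # (a, b) when x_t < 0
    "pos": (0.5,  0.3),   # (a, b) when x_t >= 0
}

def step(x):
    a, b = modes["neg"] if x < 0 else modes["pos"]
    return a * x + b + 0.05 * rng.standard_normal()

# Simulate a trajectory. The paper's smoothness assumption roughly plays
# the role of this process noise: it keeps the state distribution from
# concentrating adversarially on the mode boundary.
T = 500
xs = np.empty(T + 1)
xs[0] = rng.standard_normal()
for t in range(T):
    xs[t + 1] = step(xs[t])

# Naive online predictor (NOT the paper's method): fit a least-squares
# affine map to past transitions observed in the current mode, then
# predict one step ahead.
preds = np.zeros(T)
for t in range(1, T):
    past = xs[:t]          # states x_0, ..., x_{t-1}
    nxt = xs[1:t + 1]      # their successors x_1, ..., x_t
    mask = past < 0 if xs[t] < 0 else past >= 0
    if mask.sum() >= 2:
        A = np.column_stack([past[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(A, nxt[mask], rcond=None)
        preds[t] = coef[0] * xs[t] + coef[1]

# Cumulative squared prediction error; the paper's regret instead
# compares this quantity against the best PWA predictor in hindsight.
cum_err = np.cumsum((preds[1:] - xs[2:T + 1]) ** 2)
print("cumulative squared prediction error:", cum_err[-1])
```

The hard part the paper addresses, which this sketch sidesteps, is that the active mode is not observed and mode boundaries make the dynamics discontinuous; under the smoothed-adversary assumption, the paper's algorithms achieve regret polynomial in the problem parameters while using only an optimization oracle.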