Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Smoothed Online Learning for Prediction in Piecewise Affine Systems
Authors: Adam Block, Max Simchowitz, Russ Tedrake
NeurIPS 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "This paper builds on the recently developed smoothed online learning framework and provides the first algorithms for prediction and simulation in PWA systems whose regret is polynomial in all relevant problem parameters under a weak smoothness assumption; moreover, our algorithms are efficient in the number of calls to an optimization oracle." |
| Researcher Affiliation | Academia | Adam Block (Department of Mathematics, MIT); Max Simchowitz (MIT); Russ Tedrake (MIT) |
| Pseudocode | Yes | Algorithm 1: Main Algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper focuses on theoretical analysis and algorithms for PWA systems, using abstract data notation (e.g., covariates x_t, responses y_t, data (x_{1:s}, y_{1:s})), and does not specify or provide access information for any publicly available dataset. |
| Dataset Splits | No | The paper presents theoretical algorithms and regret bounds, but it does not describe an experimental setup that would involve specific training, validation, and test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe an experimental setup requiring specific hardware specifications. |
| Software Dependencies | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees, thus it does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper presents algorithms and theoretical guarantees (e.g., regret bounds) rather than empirical results, and therefore does not specify concrete hyperparameter values or training configurations. |