Finite Sample Analysis Of Dynamic Regression Parameter Learning

Authors: Mark Kozdoba, Edward Moroshko, Shie Mannor, Koby Crammer

NeurIPS 2022

Reproducibility Assessment (variable, result, and supporting LLM response)

Research Type: Experimental
LLM Response: We evaluate the approach on synthetic and real-world benchmarks. In Section 6 we present experimental results on synthetic and real data.

Researcher Affiliation: Collaboration
LLM Response:
- Mark Kozdoba (Technion, Israel Institute of Technology), markk@ef.technion.ac.il
- Edward Moroshko (Technion, Israel Institute of Technology), edward.moroshko@gmail.com
- Shie Mannor (Technion, Israel Institute of Technology and NVIDIA Research), shie@ee.technion.ac.il
- Koby Crammer (Technion, Israel Institute of Technology), koby@ee.technion.ac.il

Pseudocode: Yes
LLM Response: Algorithm 1, Spectrum Thresholding Variance Estimator (STVE).

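The paper's Algorithm 1 is not reproduced in this report. Purely as a generic illustration of what a spectrum-thresholding step can look like (an assumption about the flavor of the method, not the paper's STVE), the sketch below eigen-decomposes a symmetric second-moment matrix and zeroes out eigenvalues below a threshold; the function name, the threshold value, and the toy data are all hypothetical.

```python
import numpy as np

def threshold_spectrum(M, tau):
    """Generic spectrum-thresholding step: eigen-decompose a symmetric
    matrix and discard eigenvalues below the threshold tau.  This is an
    illustration of the term only, not the paper's Algorithm 1 (STVE)."""
    vals, vecs = np.linalg.eigh(M)
    vals_thr = np.where(vals > tau, vals, 0.0)  # keep only the large part of the spectrum
    return (vecs * vals_thr) @ vecs.T

# Toy example on an empirical second-moment matrix of 5-dimensional inputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
M_thr = threshold_spectrum(X.T @ X / 200, tau=0.5)
```
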
Open Source Code: No
LLM Response: The code is not provided at the moment, but it can be fully derived from Algorithm 1.

Open Datasets: Yes
LLM Response: In this section we examine the relation between daily temperatures and electricity consumption in the data from Hong et al. (2014) (see also Hong (2016)). The data are publicly available; see the references in Section 6.

Dataset Splits: Yes
LLM Response: We use the first half of the data (train set) to learn the parameters σ, η of the online regression (1)-(2) via MLE optimization and via STVE. We also use the train set to find the optimal learning rate α for the OG forecaster described by the update equation (5). This learning rate is chosen as the one that yields the smallest least-squares forecast error on the train set.

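A minimal sketch of the split-and-tune procedure described above, assuming the OG forecaster's update (5) is a standard online-gradient (LMS-style) step theta <- theta + alpha * e_t * u_t; the data placeholders, the candidate grid of learning rates, and the function names are hypothetical.

```python
import numpy as np

def og_forecast_errors(U, y, alpha):
    """Run an online-gradient forecaster and return the squared one-step
    forecast errors.  The update theta <- theta + alpha * e_t * u_t is an
    assumed form of the paper's equation (5), not a verbatim copy."""
    theta = np.zeros(U.shape[1])
    errs = np.empty(len(y))
    for t in range(len(y)):
        e = y[t] - theta @ U[t]          # one-step forecast error
        errs[t] = e ** 2
        theta = theta + alpha * e * U[t]
    return errs

def tune_learning_rate(U_train, y_train, alphas):
    """Pick the learning rate with the smallest mean squared forecast
    error on the train half (the candidate grid below is hypothetical)."""
    mses = [og_forecast_errors(U_train, y_train, a).mean() for a in alphas]
    return alphas[int(np.argmin(mses))]

# Hypothetical usage: split the series in half and tune alpha on the first half.
# U_all, y_all = ...  # load the data (not provided in this report)
# half = len(y_all) // 2
# best_alpha = tune_learning_rate(U_all[:half], y_all[:half],
#                                 alphas=np.logspace(-4, 0, 20))
```
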
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. While the ethics checklist asks about the 'type of GPUs' and is marked 'Yes', no specific details are provided in the main text or supplementary material.

Software Dependencies: No
LLM Response: The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.

Experiment Setup: Yes
LLM Response: The input dimension was n = 5, and the input sequence u_t was sampled from the Gaussian N(0, I_n). We use the first half of the data (train set) to learn the parameters σ, η of the online regression (1)-(2) via MLE optimization and via STVE. We also use the train set to find the optimal learning rate α for the OG forecaster described by the update equation (5). This learning rate is chosen as the one that yields the smallest least-squares forecast error on the train set. Full details on the preprocessing of the data, as well as additional details on the experiments, are given in Supplementary Material Section I.

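A minimal sketch of the synthetic setup, assuming the dynamic regression (1)-(2) takes the usual random-walk form theta_t = theta_{t-1} + eta * w_t and y_t = <theta_t, u_t> + sigma * v_t with standard Gaussian noise; apart from n = 5 and the Gaussian inputs, the parameter values and names are placeholders.

```python
import numpy as np

def simulate_dynamic_regression(T, n=5, sigma=0.1, eta=0.05, seed=0):
    """Simulate the synthetic setup: inputs u_t ~ N(0, I_n) with n = 5.
    The dynamics are an assumed random-walk form of equations (1)-(2):
        theta_t = theta_{t-1} + eta * w_t,       w_t ~ N(0, I_n)
        y_t     = <theta_t, u_t> + sigma * v_t,  v_t ~ N(0, 1).
    sigma and eta here are placeholder values, not the paper's."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((T, n))                       # u_t ~ N(0, I_n)
    theta = np.zeros(n)
    y = np.empty(T)
    for t in range(T):
        theta = theta + eta * rng.standard_normal(n)      # parameter drift
        y[t] = theta @ U[t] + sigma * rng.standard_normal()
    return U, y

# Example: generate a sequence and split it in half into train/test sets.
U, y = simulate_dynamic_regression(T=2000)
U_train, y_train, U_test, y_test = U[:1000], y[:1000], U[1000:], y[1000:]
```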