The noise level in linear regression with dependent data

Authors: Ingvar Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni

NeurIPS 2023

Reproducibility assessment (variable, result, and LLM response for each item):
Research Type: Theoretical. We derive upper bounds for random design linear regression with dependent (β-mixing) data, absent any realizability assumptions. In contrast to the strictly realizable martingale noise regime, no sharp, instance-optimal non-asymptotics are available in the literature. Up to constant factors, our analysis correctly recovers the variance term predicted by the Central Limit Theorem (the noise level of the problem) and thus exhibits graceful degradation as we introduce misspecification. Past a burn-in, our result is sharp in the moderate deviations regime and, in particular, does not inflate the leading-order term by mixing-time factors.
Researcher Affiliation: Collaboration. Ingvar Ziemann (University of Pennsylvania), Stephen Tu (Google Research), George J. Pappas (University of Pennsylvania), Nikolai Matni (University of Pennsylvania).
Pseudocode: No. The paper is highly theoretical, focusing on mathematical derivations and proofs, and does not contain any pseudocode or algorithm blocks.
Open Source Code: No. The paper does not contain any statements about releasing code or links to a code repository.
Open Datasets: No. This is a theoretical paper that does not conduct experiments on a specific dataset, so no dataset availability information is provided.
Dataset Splits: No. The paper is theoretical and does not involve empirical experiments with dataset splits.
Hardware Specification: No. The paper is a theoretical work without computational experiments, so no hardware specifications are mentioned.
Software Dependencies: No. The paper does not specify any software dependencies or versions, as it is a theoretical work without software implementations.
Experiment Setup: No. The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations.
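To make the abstract's setting concrete, below is a minimal illustrative sketch of random design linear regression with dependent, misspecified data. The AR(1) covariate process (geometrically β-mixing for |rho| < 1), the nonlinear target function, and all constants are assumptions chosen for illustration; they are not taken from the paper. The empirical residual variance stands in for the "noise level" (the variance of the response around its best linear approximation), which is the quantity the paper's CLT-scale variance term tracks.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, rho=0.9):
    # AR(1) covariate process: geometrically beta-mixing for |rho| < 1,
    # so samples are dependent but mix over time.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    # Misspecified target: the true regression function is nonlinear,
    # so no linear predictor is exactly realizable.
    y = np.sin(x) + 0.5 * x + 0.1 * rng.standard_normal(n)
    return x[:, None], y

n = 20_000
X, y = simulate(n)

# Ordinary least squares with an intercept (random design, dependent data).
A = np.c_[X, np.ones(n)]
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Empirical residual variance approximates the "noise level": the variance
# of y around its best linear approximation, which includes both the
# observation noise and the misspecification error.
resid = y - A @ w_hat
print("estimated noise level:", resid.var())
```

As the misspecification grows (e.g., a stronger nonlinear component), the estimated noise level rises smoothly, which is the "graceful degradation" the abstract refers to.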