Sample Complexity of Forecast Aggregation

Authors: Tao Lin, Yiling Chen

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we initiate the study of sample complexity of forecast aggregation... We show that however, even in this benign model, optimal aggregation in general needs exponentially many samples. ... The main technical part of our paper is to prove the Ω(m^(n−2)/ε) lower bound on the sample complexity of forecast aggregation for the general case, via a reduction from the distribution learning problem.
Researcher Affiliation | Academia | Tao Lin, Harvard University, Cambridge, MA 02138, tlin@g.harvard.edu; Yiling Chen, Harvard University, Cambridge, MA 02138, yiling@seas.harvard.edu
Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide an unambiguous statement of, or link to, open-source code for the methodology described.
Open Datasets | No | The paper refers to 'past data' and 'historical weather forecasts' as examples but does not specify a publicly available dataset or provide access information for any dataset used in its theoretical analysis.
Dataset Splits | No | This is a theoretical paper with no empirical experiments, so it does not provide training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical, focusing on mathematical proofs and analyses; it does not describe an experimental setup such as hyperparameters or training settings.
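Since the paper provides no code, the sample-based aggregation problem it studies can be illustrated with a toy sketch. The setup below (two experts, signal accuracy 0.8, the `draw_sample` and `aggregate` helpers) is entirely hypothetical and is not the paper's construction; it only shows the general idea of learning an aggregator from samples of (expert reports, outcome) pairs by estimating the conditional mean of the outcome, which minimizes squared loss on the samples.

```python
import random
from collections import defaultdict

# Hypothetical toy model (not from the paper): two experts each observe a
# private signal correlated with a binary event Y and report their posterior
# P(Y = 1 | signal). We learn an aggregator from samples by estimating
# E[Y | report vector] empirically.

random.seed(0)

def draw_sample():
    y = random.random() < 0.5                  # binary event, uniform prior
    reports = []
    for _ in range(2):
        correct = random.random() < 0.8        # signal matches y w.p. 0.8
        s = y if correct else not y
        reports.append(0.8 if s else 0.2)      # expert's posterior given signal
    return tuple(reports), int(y)

# Collect samples and estimate E[Y | reports] for each report vector.
counts = defaultdict(lambda: [0, 0])           # reports -> [sum of y, count]
for _ in range(10000):
    r, y = draw_sample()
    counts[r][0] += y
    counts[r][1] += 1

def aggregate(reports):
    s, n = counts.get(tuple(reports), (0, 0))
    return s / n if n else 0.5                 # fall back to prior if unseen

print(aggregate((0.8, 0.8)))  # high: both experts lean toward Y = 1
print(aggregate((0.2, 0.2)))  # low: both experts lean toward Y = 0
```

In this toy setting the report vectors take only four values, so a modest number of samples suffices; the paper's lower bound shows that in the general case, where the experts' joint information structure is unrestricted, the number of samples needed grows exponentially with the number of experts.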