Adaptable Regression Method for Ensemble Consensus Forecasting

Authors: John Williams, Peter Neilley, Joseph Koval, Jeff McDonald

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The algorithm is illustrated for 0-72 hour temperature forecasts at over 1200 sites in the contiguous U.S. based on a 22-member forecast ensemble, and its performance over multiple seasons is compared to a state-of-the-art ensemble-based forecasting system.
Researcher Affiliation | Industry | The Weather Company, Andover, MA; john.williams@weather.com
Pseudocode | No | The paper describes the methodology using mathematical equations and steps, but does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing code or a link to a code repository.
Open Datasets | No | The paper mentions using "Surface temperature measurements from over 1200 ground weather station (METAR) locations" and "hourly temperature forecasts from an ensemble of 22 input forecasts", but does not provide concrete access information (link, DOI, or formal citation for a public dataset).
Dataset Splits | No | The paper notes that "Cross-validation was not appropriate for this evaluation" and that "experiments for determining good parameters were performed using a small number of odd-hour forecast lead times", but does not provide specific dataset split information (e.g., exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing subsets.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions MATLAB's quadprog and linsolve functions, but does not provide version numbers for these software dependencies.
Experiment Setup | Yes | For these AR results, the bias modulation = 1 or 0.8, regularization parameter = 0 or 0.1, and error covariance aggregation proportion = 0 or 0.7; the bias aggregation proportion = 0.0 is fixed for all four. In this and all other AR runs shown in this paper, the bias and error covariance learning rates, the goal weight, and the weight limits were fixed at the values given in the paper.
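The paper solves a constrained, weighted regression for the consensus weights using MATLAB's quadprog. As a rough illustration only, and not the authors' method, the sketch below fits ridge-regularized consensus weights for a synthetic 22-member ensemble using the regularization parameter 0.1 quoted above; the data, sizes, and all variable names are invented, and the paper's weight limits and goal-weight constraint are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_times = 22, 200  # toy sizes; the paper uses a 22-member ensemble

# Synthetic "truth" temperature series and noisy member forecasts of it.
truth = 10.0 * np.sin(np.linspace(0.0, 8.0, n_times)) + 15.0
X = truth[:, None] + rng.normal(0.0, 2.0, (n_times, n_members))  # member forecasts
y = truth + rng.normal(0.0, 0.5, n_times)                        # observations

# Ridge-regularized least squares for consensus weights:
#   w = argmin ||X w - y||^2 + lam ||w||^2  =>  (X'X + lam I) w = X'y
lam = 0.1  # regularization parameter (one of the values quoted above)
w = np.linalg.solve(X.T @ X + lam * np.eye(n_members), X.T @ y)

consensus = X @ w  # weighted-consensus forecast
```

Because each synthetic member is an unbiased forecast, the learned weights come out near 1/22 each and sum to roughly one, and the consensus beats any single member; the paper's actual AR method additionally modulates bias, aggregates error covariances across lead times, and enforces weight limits via quadratic programming.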