Incentive-Compatible Forecasting Competitions

Authors: Jens Witkowski, Rupert Freeman, Jennifer Wortman Vaughan, David M. Pennock, Andreas Krause

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We lower-bound the probability that our mechanism selects the most accurate forecaster, and give rates for how quickly this bound approaches 1 as the number of events grows. Our techniques can be generalized to the related problems of outputting a ranking over forecasters and hiring a forecaster with high accuracy on future events. (A simulation sketch of this selection-probability notion appears after this table.)
Researcher Affiliation | Collaboration | Jens Witkowski (ETH Zurich, jensw@inf.ethz.ch); Rupert Freeman (Duke University, rupert@cs.duke.edu); Jennifer Wortman Vaughan (Microsoft Research, jenn@microsoft.com); David M. Pennock (Microsoft Research, dpennock@microsoft.com); Andreas Krause (ETH Zurich, krausea@ethz.ch)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. It mentions external platforms such as the Netflix Prize and Kaggle, but there is no code release for the authors' own work. The conclusion also states that evaluating ELF experimentally is a "future research direction", implying that code is not yet public.
Open Datasets | No | The paper is theoretical and does not describe experiments on datasets, so no information on public dataset availability or access is provided.
Dataset Splits | No | The paper is theoretical and does not describe experiments, so no training/validation/test dataset splits are provided.
Hardware Specification | No | The paper is theoretical and does not describe experiments, so no hardware details are provided.
Software Dependencies | No | The paper is theoretical and does not describe experiments, so no software dependencies with version numbers are provided.
Experiment Setup | No | The paper is theoretical and does not describe experiments, so no experimental setup details such as hyperparameters or training configurations are provided.
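
The selection-probability guarantee quoted under Research Type can be illustrated with a small simulation. The sketch below is only an illustration of the general notion that the chance of selecting the most accurate forecaster grows with the number of events: it uses a naive lowest-empirical-Brier-score selection rule with truthful reports as a stand-in, not the paper's incentive-compatible ELF mechanism, and the noise model, parameters, and function names are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

def estimate_selection_prob(sigmas, num_events, num_trials=2000):
    """Monte Carlo estimate of how often the most accurate forecaster
    (index 0, smallest report noise) ends up with the lowest empirical
    Brier score after num_events binary events.

    Illustrative stand-in only: forecasters report truthfully and are
    ranked by empirical quadratic loss; this is NOT the paper's ELF
    mechanism."""
    sigmas = np.asarray(sigmas, dtype=float)   # per-forecaster noise levels (assumed model)
    n = len(sigmas)
    wins = 0
    for _ in range(num_trials):
        theta = rng.uniform(0.0, 1.0, size=num_events)      # true event probabilities
        outcomes = rng.binomial(1, theta)                    # realized 0/1 outcomes
        noise = rng.normal(0.0, sigmas[:, None], (n, num_events))
        reports = np.clip(theta + noise, 0.0, 1.0)           # noisy probabilistic forecasts
        brier = ((reports - outcomes) ** 2).mean(axis=1)     # empirical quadratic loss per forecaster
        wins += int(np.argmin(brier) == 0)                   # did the most accurate forecaster win?
    return wins / num_trials

# Hypothetical accuracy levels: forecaster 0 is the most accurate.
for m in (10, 50, 200, 1000):
    print(m, estimate_selection_prob(sigmas=[0.05, 0.10, 0.15, 0.20], num_events=m))

Under these assumptions, the printed estimates should climb toward 1 as num_events grows, mirroring the kind of guarantee stated in the abstract excerpt above (though the paper proves its bound for ELF rather than for this naive rule).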