Online Learning in Betting Markets: Profit versus Prediction

Authors: Haiqing Zhu, Alexander Soen, Yun Kuen Cheung, Lexing Xie

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the efficiency of Algorithms 1 and 2 empirically. An advantage of our theoretical results is that they hold for a wide range of bettor belief distributions, only requiring weak assumptions. Our empirical analysis aims to elucidate how different properties of the belief distributions (not captured by theory) change the performance of our algorithms. Fig. 2 summarises our observations of Algorithm 1. We use four different initialisations, and set the learning rate as η_{t+1} = 300/(t + 5000). As a baseline, we compare this to a risk-balancing heuristic...
Researcher Affiliation | Academia | School of Computing, The Australian National University, Canberra, Australia; RIKEN Center for Advanced Intelligence Project, Tokyo, Japan.
Pseudocode | Yes | Algorithm 1 (Online SA Algorithm, page 4) and Algorithm 2 (Follow The Leader, page 5).
Open Source Code | Yes | Code and data to reproduce results are found at: https://github.com/haiqingzhu543/Betting-Market-Simulation-2024.
Open Datasets | No | The paper uses simulated data generated for its experiments. It states: 'We generate 10^5 Kelly bettors with a mixture of beliefs, one Gaussian for events A and B respectively, followed by a sigmoid function to ensure that beliefs lie within (0, 1), i.e. p_t = sigmoid(s_t), t = 1, ..., 10^5, with s_t ~ 0.25 N(2, 1) + 0.75 N(-1, 1).' (A sketch of this sampling step appears after the table.)
Dataset Splits | No | The paper does not provide dataset splits for training, validation, or testing. It mentions using 100,000 simulated bettors, but no explicit splits.
Hardware Specification | No | No hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments are provided in the paper.
Software Dependencies | No | No ancillary software details (e.g., library or solver names with version numbers) are provided in the paper.
Experiment Setup | Yes | We use four different initialisations, and set the learning rate as η_{t+1} = 300/(t + 5000). (A sketch pairing this schedule with a stochastic-approximation update appears below.)
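
As a reading aid for the data-generation quote in the Open Datasets row, here is a minimal sketch of that sampling step, assuming a standard NumPy setup. The variable names (n_bettors, component, s, p) and the seed are illustrative, not taken from the authors' repository; only the mixture 0.25 N(2, 1) + 0.75 N(-1, 1), the sigmoid, and the population size of 10^5 come from the quoted text.

```python
import numpy as np

# Illustrative sketch: draw 10^5 bettor logits s_t from the stated mixture
# 0.25 * N(2, 1) + 0.75 * N(-1, 1), then squash through a sigmoid so that
# each belief p_t lies strictly inside (0, 1), as quoted from the paper.
rng = np.random.default_rng(0)  # seed is arbitrary, for repeatability
n_bettors = 100_000

# Pick a mixture component per bettor: 0 -> N(2, 1), 1 -> N(-1, 1).
component = rng.choice([0, 1], size=n_bettors, p=[0.25, 0.75])
means = np.where(component == 0, 2.0, -1.0)
s = rng.normal(loc=means, scale=1.0)

# Sigmoid keeps every belief in the open interval (0, 1).
p = 1.0 / (1.0 + np.exp(-s))
```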
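
The Experiment Setup row quotes only the learning-rate schedule η_{t+1} = 300/(t + 5000) and the use of four initialisations. The sketch below pairs that schedule with a hypothetical stochastic-approximation step that nudges the market price toward each arriving bettor's belief; the actual update direction of Algorithm 1 is defined in the paper, so everything here other than the schedule and the mixture of beliefs should be read as an assumption.

```python
import numpy as np

def learning_rate(t: int) -> float:
    # Schedule quoted in the paper: eta_{t+1} = 300 / (t + 5000).
    return 300.0 / (t + 5000.0)

rng = np.random.default_rng(0)
# The four starting prices below are illustrative stand-ins for the paper's
# "four different initialisations"; the paper does not list the values here.
for m0 in (0.1, 0.3, 0.7, 0.9):
    m = m0
    for t in range(100_000):
        # Bettor belief from the quoted mixture, squashed by a sigmoid.
        s_t = rng.normal(2.0, 1.0) if rng.random() < 0.25 else rng.normal(-1.0, 1.0)
        p_t = 1.0 / (1.0 + np.exp(-s_t))
        # Hypothetical SA step: move the price toward the observed belief.
        m += learning_rate(t) * (p_t - m)
    print(f"init {m0:.1f} -> final price {m:.3f}")
```

Under this assumed update, the decaying schedule makes early bettors move the price substantially while later ones only refine it, which is the usual role of a Robbins-Monro-style step size in stochastic approximation.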