Gambling-Based Confidence Sequences for Bounded Random Vectors
Authors: Jongha Jon Ryu, Gregory W. Wornell
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulations demonstrate the tightness of these confidence sequences compared to existing methods. When applied to the sampling without-replacement setting for finite categorical data, it is shown that the resulting CS based on a universal gambling strategy is provably tighter than that of the posterior-prior ratio martingale proposed by Waudby-Smith and Ramdas. |
| Researcher Affiliation | Academia | 1Department of EECS, MIT, Cambridge, Massachusetts, USA. Correspondence to: J. Jon Ryu <jongha@mit.edu>. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have open-sourced the implementation of the proposed confidence sequences and provided the codes to reproduce the simulation results online: https://github.com/jongharyu/confidence-sequence-via-gambling. |
| Open Datasets | No | The paper describes using i.i.d. categorical data, a finite population of balls for sampling without replacement, and i.i.d. Dirichlet observations. These are data generation methods or specific finite populations described in the paper, not references to publicly available or open datasets with concrete access information (e.g., links, DOIs, citations to established benchmarks). |
| Dataset Splits | No | The paper describes simulation setups with parameters like length T=100 or a population of 1000 balls, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) for the data used in experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., library names like PyTorch, TensorFlow, or specific solvers with their versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | To demonstrate the performance of the KT CS (denoted as KT(K)), we run simulations with i.i.d. categorical observations of length T = 100, with different mean vectors of dimension K ∈ {3, 4, 5}; see Fig. 1, where the title of each column indicates the underlying mean. Each experiment was run with 100 random realizations and δ = 0.05 was used. We consider a finite population of 1000 balls consisting of 600 red, 250 green, and 150 blue balls, and report the result averaged over 1000 random permutations. |
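The data-generation setup described in the Experiment Setup row can be sketched as follows. This is a minimal illustrative sketch using only the parameters quoted above (the mean vector and random seed are assumptions for demonstration); it is not the authors' released implementation, which is available at the repository linked in the Open Source Code row.

```python
import random
from collections import Counter

random.seed(0)  # illustrative seed, not from the paper

# Experiment 1: i.i.d. categorical observations of length T = 100 with
# K = 3 categories; the paper sweeps K in {3, 4, 5} and several mean
# vectors (this particular mean is an assumed example).
T, delta = 100, 0.05
mean = [0.6, 0.25, 0.15]
obs = random.choices(range(3), weights=mean, k=T)
empirical = [obs.count(k) / T for k in range(3)]

# Experiment 2: finite population of 1000 balls (600 red, 250 green,
# 150 blue); sampling without replacement corresponds to observing the
# balls in the order of a uniformly random permutation. The paper
# averages results over 1000 such permutations.
population = [0] * 600 + [1] * 250 + [2] * 150
random.shuffle(population)
counts = Counter(population)
```

A CS procedure would then be run along `obs` (or along the shuffled `population`), maintaining a confidence set for the mean vector at confidence level 1 - δ = 0.95 after each observation.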