Adaptive Selective Sampling for Online Prediction with Experts
Authors: Rui Castro, Fredrik Hellström, Tim van Erven
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we present numerical experiments empirically showing that the normalized regret of the label-efficient forecaster can asymptotically match known minimax rates for pool-based active learning, suggesting it can optimally adapt to benign settings. To further examine the performance of the label-efficient forecaster, we conduct a simulation study. |
| Researcher Affiliation | Academia | Rui M. Castro, Eindhoven University of Technology, Eindhoven Artificial Intelligence Systems Institute (EAISI), rmcastro@tue.nl; Fredrik Hellström, University College London, f.hellstrom@ucl.ac.uk; Tim van Erven, University of Amsterdam, tim@timvanerven.nl |
| Pseudocode | No | The paper describes the algorithms in prose and mathematical equations, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The full code, which can be executed in less than one hour on an M1 processor, is provided in the supplementary material. |
| Open Datasets | No | For the simulations, we use the specific choice ζ(x) = 2 sign(x − τ0)\|x − τ0\|^(κ−1) to generate sequences (Y1, . . . , Yn), based on a sequence of features (X1, . . . , Xn) sampled from the uniform distribution on [0, 1]. |
| Dataset Splits | No | The paper describes a sequential prediction problem where data is generated for n=50000 rounds. It does not mention traditional train/validation/test dataset splits as it operates in an online setting rather than a batch setting with a fixed dataset. |
| Hardware Specification | Yes | The full code, which can be executed in less than one hour on an M1 processor, is provided in the supplementary material. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | In the simulations, we set τ0 = 1/2 and N = n + 𝟙{n is even}. This choice enforces that N is odd, ensuring the optimal classifier is one of the experts. Throughout, we set η = sqrt(8 ln(N)/n), which minimizes the regret bound (7). |
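The extracted setup details above are enough to sketch the data-generating process. The following is a minimal, hedged reconstruction, not the authors' code: the feature distribution, the function ζ, the choice of τ0, N, and η all come from the quotes in the table, while the label model P(Y = 1 | X = x) = (1 + ζ(x))/2 with Y ∈ {−1, +1} and the margin parameter κ = 2 are assumptions for illustration.

```python
import numpy as np

def simulate_setup(n=50_000, tau0=0.5, kappa=2.0, seed=0):
    """Sketch of the simulation setup described in the paper.

    Assumptions (not stated verbatim in the extracted quotes):
    - labels Y in {-1, +1} drawn with P(Y=1 | X=x) = (1 + zeta(x)) / 2
    - kappa = 2.0 as an illustrative margin parameter
    """
    rng = np.random.default_rng(seed)

    # Features sampled uniformly on [0, 1], as quoted in the table.
    x = rng.uniform(0.0, 1.0, size=n)

    # zeta(x) = 2 sign(x - tau0) |x - tau0|^(kappa - 1), from the quote.
    zeta = 2.0 * np.sign(x - tau0) * np.abs(x - tau0) ** (kappa - 1.0)

    # Assumed label model; clip keeps probabilities valid for any kappa.
    p = np.clip((1.0 + zeta) / 2.0, 0.0, 1.0)
    y = np.where(rng.uniform(size=n) < p, 1, -1)

    # N = n + 1{n is even}: forces N odd so the optimal classifier
    # is one of the N experts.
    N = n + (1 if n % 2 == 0 else 0)

    # Learning rate minimizing the regret bound (7) cited in the table.
    eta = np.sqrt(8.0 * np.log(N) / n)
    return x, y, N, eta
```

With n = 50000 (even), this yields N = 50001 experts, matching the paper's requirement that N be odd.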