Efficient Online Set-valued Classification with Bandit Feedback

Authors: Zhou Wang, Xingye Qiao

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The effectiveness of BCCP is empirically validated using three different score functions and two policies (for pulling arms) across three datasets, demonstrating the versatility and efficacy of the proposed framework. |
| Researcher Affiliation | Academia | Department of Mathematics and Statistics, Binghamton University, New York, USA. |
| Pseudocode | Yes | Algorithm 1: Bandit Conformal; Algorithm 2: Bandit Conformal with Experts. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the methodology, nor a link to a code repository. |
| Open Datasets | Yes | The experimental setup includes the CIFAR10, CIFAR100 (with 20 coarse labels), and SVHN datasets, each undergoing 5 replications. |
| Dataset Splits | Yes | In the split conformal method (Papadopoulos et al., 2002; Lei et al., 2013), the index set I of the original dataset D is partitioned into two disjoint subsets: the training part Itr and the calibration part Ical. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models or memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions the ResNet50 architecture and the ADAM optimizer but does not give version numbers for these or any other software libraries. |
| Experiment Setup | Yes | A non-coverage rate of α = 0.05 is maintained throughout. For computational efficiency, model training is performed on data batches of size 256, using the ADAM optimizer with a learning rate of η₁ = 10⁻⁴. The entire online learning process spans around T = 6000 iterations. |
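The Dataset Splits row refers to the standard split conformal recipe: fit the model on the training part Itr, compute nonconformity scores on the calibration part Ical, and include every label whose score falls below a finite-sample-corrected quantile. A minimal sketch follows; the function name, the `1 - score` nonconformity choice, and the array layout are illustrative assumptions, not the paper's BCCP procedure, which additionally handles bandit (single-arm) feedback.

```python
import numpy as np

def split_conformal_set(scores_cal, labels_cal, scores_test, alpha=0.05):
    """Build a split conformal prediction set for a single test point.

    scores_cal : (n_cal, K) score-function outputs on the calibration split
    labels_cal : (n_cal,) true labels for the calibration split
    scores_test: (K,) score-function outputs for the test point
    alpha      : target non-coverage rate (the paper uses 0.05)
    """
    n = len(labels_cal)
    # Nonconformity score: 1 minus the score of the true class
    # (one common choice; the paper studies several score functions).
    noncon = 1.0 - scores_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(noncon, q_level, method="higher")
    # Prediction set: all labels whose nonconformity does not exceed q.
    return [k for k in range(len(scores_test)) if 1.0 - scores_test[k] <= q]
```

With well-calibrated scores this guarantees that the prediction set contains the true label with probability at least 1 − α, marginally over the calibration draw.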