Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Linear Streaming Bandit: Regret Minimization and Fixed-Budget Epsilon-Best Arm Identification

Authors: Yuming Shao, Zhixuan Fang

AAAI 2025

Reproducibility results (variable, classification result, and supporting LLM response):

Research Type: Experimental
  LLM Response: We validate the effectiveness of all proposed algorithms through experiments on both synthetic and real-world datasets. ... We implement our algorithms on both synthetic and real-world datasets. ... Their regret curves, averaged over N = 20 repetitions, are shown in Figure 2. ... The Kaggle dataset (Chaudhari 2023) contains information on more than 17k anonymous workers... The regret curves are shown in Figure 3.

Researcher Affiliation: Academia
  LLM Response: 1 IIIS, Tsinghua University, Beijing, China; 2 Shanghai Qi Zhi Institute, Shanghai, China

Pseudocode: Yes
  LLM Response:
    Algorithm 1: Confidence Radius-directed Multi-Pass Sampling (CR-MPS)
    Algorithm 2: ϵ-Grid Multi-Pass Successive Elimination (G-MP-SE)
    Algorithm 3: Sample Budget Assignment (SBA)
    Algorithm 4: Active Arm Counter Update (C-Update)
    Algorithm 5: Single-Pass Comparison (SPC)

Open Source Code: No
  LLM Response: The paper mentions implementing algorithms and showing experimental results, but it does not provide any specific links to code repositories or explicit statements about code release.

Open Datasets: Yes
  LLM Response: Real-World Dataset. The Kaggle dataset (Chaudhari 2023) contains information on more than 17k anonymous workers, including their resume details and performance scores. We select K = 10k of them and run these algorithms again on this dataset. The regret curves are shown in Figure 3. https://www.kaggle.com/datasets/sanjanchaudhari/employeesperformance-for-hr-analytics/data

Dataset Splits: No
  LLM Response: The paper mentions using synthetic and real-world datasets and running 20 repetitions on the synthetic dataset, but it does not specify any train/test/validation splits for either dataset.

Hardware Specification: No
  LLM Response: The paper mentions implementing algorithms and running experiments but provides no specific details about the hardware (e.g., CPU or GPU models, memory) used for these experiments.

Software Dependencies: No
  LLM Response: The paper does not mention any specific software dependencies (e.g., libraries, frameworks) or their version numbers used for the implementation or experiments.

Experiment Setup: No
  LLM Response: The paper describes the parameters of the proposed algorithms (e.g., confidence parameter, regularization parameter, precision parameters, sample budget, number of passes). However, it does not provide specific hyperparameter values for a model or system, such as learning rates, batch sizes, or optimizer settings, nor other typical experimental setup details like model initialization or training schedules.
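The evidence above notes that the paper reports regret curves averaged over N = 20 independent repetitions. The following is a purely illustrative sketch of that averaging protocol only; the toy epsilon-greedy policy, arm count, horizon, and noise level are all stand-in assumptions, not the paper's CR-MPS or G-MP-SE algorithms, whose details are not given here.

```python
import numpy as np

def run_once(K=10, T=1000, eps=0.1, rng=None):
    # One run of a toy epsilon-greedy bandit (illustrative stand-in only).
    # Returns the cumulative-regret curve over T rounds.
    rng = np.random.default_rng() if rng is None else rng
    means = rng.uniform(0.0, 1.0, size=K)   # unknown arm means
    best = means.max()
    counts = np.zeros(K)
    estimates = np.zeros(K)
    regret = np.zeros(T)
    cum = 0.0
    for t in range(T):
        # Explore with probability eps, otherwise exploit current estimates.
        arm = int(rng.integers(K)) if rng.random() < eps else int(estimates.argmax())
        reward = rng.normal(means[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        cum += best - means[arm]            # instantaneous (pseudo-)regret
        regret[t] = cum
    return regret

def averaged_regret(n_reps=20, seed=0, **kwargs):
    # Average the cumulative-regret curve over n_reps independent repetitions,
    # mirroring the "averaged over N = 20 repetitions" protocol in the evidence.
    master = np.random.default_rng(seed)
    runs = [run_once(rng=np.random.default_rng(int(master.integers(1 << 31))), **kwargs)
            for _ in range(n_reps)]
    return np.mean(runs, axis=0)
```

The averaged curve is what would be plotted as a single line in a figure such as the paper's Figure 2; per-run curves could also be kept to show variability across repetitions.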