Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Quantifying the Cost of Learning in Queueing Systems
Authors: Daniel Freund, Thodoris Lykouris, Wentao Weng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we argue that an asymptotic metric, which focuses on late-stage performance, is insufficient to capture the intrinsic statistical complexity of learning in queueing systems which typically occurs in the early stage. Instead, we propose the Cost of Learning in Queueing (CLQ), a new metric that quantifies the maximum increase in time-averaged queue length caused by parameter uncertainty. We characterize the CLQ of a single-queue multi-server system, and then extend these results to multi-queue multi-server systems and networks of queues. In establishing our results, we propose a unified analysis framework for CLQ that bridges Lyapunov and bandit analysis, provides guarantees for a wide range of algorithms, and could be of independent interest. |
| Researcher Affiliation | Academia | Daniel Freund MIT Cambridge, MA 02139 EMAIL Thodoris Lykouris MIT Cambridge, MA 02139 EMAIL Wentao Weng MIT Cambridge, MA 02139 EMAIL |
| Pseudocode | Yes | Algorithm 1: UCB for a single-queue multi-server system |
| Open Source Code | No | No statement about providing open-source code or a link to a code repository is found. |
| Open Datasets | No | The paper describes parameters for a simulation (K = 5, λ = 0.45, µ = (0.045, 0.35, 0.35, 0.35, 0.55)) for Figure 1, but this does not constitute a public dataset with access information. |
| Dataset Splits | No | The paper does not mention any training, validation, or test splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments. |
| Software Dependencies | No | The paper does not provide any specific ancillary software details with version numbers. |
| Experiment Setup | No | The paper does not contain specific experimental setup details such as hyperparameter values or training configurations. |
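For context on the pseudocode and simulation rows above: the paper's Algorithm 1 applies UCB to server selection in a single-queue multi-server system, and the Figure 1 simulation reportedly uses K = 5, λ = 0.45, µ = (0.045, 0.35, 0.35, 0.35, 0.55). The sketch below is a minimal illustration of that setup, assuming a discrete-time Bernoulli arrival/service model and a standard UCB1 index; the paper's actual Algorithm 1 and simulation protocol may differ in details.

```python
import math
import random

def simulate_ucb_queue(K, lam, mu, horizon, seed=0):
    """Discrete-time single-queue, K-server simulation with UCB server selection.

    Each step: a job arrives with probability lam; if the queue is nonempty,
    the controller routes the head-of-line job to the server with the highest
    UCB1 index, and that server completes the job with probability mu[i].
    Returns the time-averaged queue length over the horizon.
    """
    rng = random.Random(seed)
    counts = [0] * K       # number of times each server was selected
    successes = [0] * K    # observed service completions per server
    queue = 0
    total_queue = 0
    for t in range(1, horizon + 1):
        if rng.random() < lam:          # Bernoulli arrival
            queue += 1
        if queue > 0:
            def ucb_index(i):
                # Unselected servers get priority; otherwise empirical
                # mean plus the standard UCB1 exploration bonus.
                if counts[i] == 0:
                    return float("inf")
                mean = successes[i] / counts[i]
                return mean + math.sqrt(2.0 * math.log(t) / counts[i])
            i = max(range(K), key=ucb_index)
            counts[i] += 1
            if rng.random() < mu[i]:    # Bernoulli service completion
                successes[i] += 1
                queue -= 1
        total_queue += queue
    return total_queue / horizon
```

With the paper's reported parameters, `simulate_ucb_queue(5, 0.45, [0.045, 0.35, 0.35, 0.35, 0.55], 100_000)` should stabilize once UCB concentrates on the fastest server (µ = 0.55 > λ = 0.45); the early-stage backlog accumulated while learning is what a CLQ-style metric is designed to capture.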