Quantifying the Cost of Learning in Queueing Systems
Authors: Daniel Freund, Thodoris Lykouris, Wentao Weng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we argue that an asymptotic metric, which focuses on late-stage performance, is insufficient to capture the intrinsic statistical complexity of learning in queueing systems which typically occurs in the early stage. Instead, we propose the Cost of Learning in Queueing (CLQ), a new metric that quantifies the maximum increase in time-averaged queue length caused by parameter uncertainty. We characterize the CLQ of a single-queue multi-server system, and then extend these results to multi-queue multi-server systems and networks of queues. In establishing our results, we propose a unified analysis framework for CLQ that bridges Lyapunov and bandit analysis, provides guarantees for a wide range of algorithms, and could be of independent interest. |
| Researcher Affiliation | Academia | Daniel Freund MIT Cambridge, MA 02139 dfreund@mit.edu Thodoris Lykouris MIT Cambridge, MA 02139 lykouris@mit.edu Wentao Weng MIT Cambridge, MA 02139 wweng@mit.edu |
| Pseudocode | Yes | Algorithm 1: UCB for a single-queue multi-server system |
| Open Source Code | No | The paper contains no statement about providing open-source code and no link to a code repository. |
| Open Datasets | No | The paper describes parameters for a simulation (K = 5, λ = 0.45, µ = (0.045, 0.35, 0.35, 0.35, 0.55)) for Figure 1, but this does not constitute a public dataset with access information. |
| Dataset Splits | No | The paper does not mention any training, validation, or test splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments. |
| Software Dependencies | No | The paper does not provide any specific ancillary software details with version numbers. |
| Experiment Setup | No | The paper does not contain specific experimental setup details such as hyperparameter values or training configurations. |
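The table above notes that the paper's Algorithm 1 is a UCB rule for a single-queue multi-server system, and that Figure 1 uses the parameters K = 5, λ = 0.45, µ = (0.045, 0.35, 0.35, 0.35, 0.55). The following is a hedged, minimal sketch of such a simulation; the function name `simulate_clq_ucb`, the discrete-time Bernoulli dynamics, the horizon, and the seed are assumptions for illustration, not the authors' actual experimental code.

```python
import math
import random

def simulate_clq_ucb(lam=0.45, mu=(0.045, 0.35, 0.35, 0.35, 0.55),
                     horizon=20000, seed=0):
    """Sketch of a discrete-time single-queue multi-server simulation.

    Each period a job arrives with probability lam; a UCB index picks
    one server, which completes the head-of-line job with probability
    mu[k]. Returns the time-averaged queue length over the horizon.
    """
    rng = random.Random(seed)
    K = len(mu)
    counts = [0] * K      # number of service attempts per server
    successes = [0] * K   # completed services per server
    queue = 0
    total_queue = 0
    for t in range(1, horizon + 1):
        if rng.random() < lam:
            queue += 1
        if queue > 0:
            # UCB index: empirical service rate plus exploration bonus;
            # untried servers get priority via an infinite index.
            def index(k):
                if counts[k] == 0:
                    return float('inf')
                return (successes[k] / counts[k]
                        + math.sqrt(2 * math.log(t) / counts[k]))
            k = max(range(K), key=index)
            counts[k] += 1
            if rng.random() < mu[k]:
                successes[k] += 1
                queue -= 1
        total_queue += queue
    return total_queue / horizon
```

Since the best service rate (0.55) exceeds the arrival rate (0.45), the queue is stable once the scheduler concentrates on the best server; the early-stage excess queue length accumulated while exploring is the kind of transient cost the paper's CLQ metric is meant to capture.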