Efficient Online Clustering with Moving Costs

Authors: Dimitrios Christou, Stratis Skoulakis, Volkan Cevher

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
In Section F of the Appendix we experimentally compare our algorithm with the algorithm of Fotakis et al. [31]. Our experiments verify that our algorithm is robust to increases of the facility weight γ, while the algorithm of [31] presents a significant cost increase. We additionally evaluate our algorithm on the MNIST and CIFAR10 datasets. Our experimental evaluations suggest that the O(log n)-regret bound is a pessimistic upper bound and that in practice our algorithm performs significantly better.

Researcher Affiliation | Academia
Dimitris Christou, UT Austin (christou@cs.utexas.edu); Stratis Skoulakis, LIONS, EPFL (efstratios.skoulakis@epfl.ch); Volkan Cevher, LIONS, EPFL (volkan.cevher@epfl.ch)

Pseudocode | Yes
Algorithm 1: O(1)-regret for HSTs [...]; Algorithm 2: O(log n)-regret for graphs [...]; Algorithm 3: FTRL with dilated entropic regularization [...]; Algorithm 4: Cut&Round [...]; Algorithm 5: Alloc

Open Source Code | No
The paper does not contain an explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository.

Open Datasets | Yes
We evaluate the performance of Algorithm 2 on the MNIST and CIFAR10 datasets.

Dataset Splits | No
The paper describes how images are sampled and how clients arrive in an online learning setting, but it does not specify explicit training, validation, or test splits in terms of percentages or counts, nor does it refer to standard predefined splits for these purposes.

Hardware Specification | No
The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used for running its experiments.

Software Dependencies | No
The paper does not provide specific ancillary software details, such as library or solver names with version numbers, that would be needed to replicate the experiments.

Experiment Setup | Yes
In all the following experiments the step-size of Algorithm 3 (a subroutine of Algorithm 2) is set to η := max(γ, 1) / (knT)^{1/2}. [...] In this experiment the underlying graph is the 0.01-discretization of [0, 1] × [0, 1]. At each round t ≥ 1, we periodically select one of the four balls of radius R = 0.2 depicted in Figure 1, and a client then arrives uniformly at random on the selected ball. [...] We then evaluate Algorithm 2 in the latter setting for T = 3000 rounds and k = 10 facilities. [...] This time, the requests arrive in batches of size R = 10 for T = 3000 rounds.
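The synthetic setup quoted above (a 0.01-discretized unit square, clients sampled uniformly from one of four balls of radius R = 0.2, and step-size η = max(γ, 1)/√(knT)) can be sketched as follows. This is a reconstruction, not the authors' code: the ball centers and the value of γ are hypothetical assumptions, since the paper's Figure 1 (not reproduced here) fixes the actual four balls.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID_STEP = 0.01   # 0.01-discretization of [0, 1] x [0, 1]
R = 0.2            # ball radius
T = 3000           # rounds
k = 10             # facilities
gamma = 1.0        # facility weight; hypothetical value, not from the paper

# Hypothetical ball centers; the paper's Figure 1 depicts the actual four balls.
centers = [(0.25, 0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)]

def sample_client(t):
    """Periodically select one of the four balls, then sample a grid point
    uniformly at random inside it (rejection sampling on the disk)."""
    cx, cy = centers[t % 4]
    while True:
        dx, dy = rng.uniform(-R, R, size=2)
        if dx * dx + dy * dy <= R * R:
            # Snap to the 0.01 grid and clip to the unit square.
            px = min(max(round((cx + dx) / GRID_STEP) * GRID_STEP, 0.0), 1.0)
            py = min(max(round((cy + dy) / GRID_STEP) * GRID_STEP, 0.0), 1.0)
            return px, py

# Number of grid vertices: a (1/0.01 + 1) x (1/0.01 + 1) lattice.
n = (round(1 / GRID_STEP) + 1) ** 2

# Step-size of Algorithm 3, as stated in the paper.
eta = max(gamma, 1.0) / np.sqrt(k * n * T)

clients = [sample_client(t) for t in range(T)]
```

Rejection sampling is used here only because it is the simplest way to draw uniformly from a disk; any exact disk sampler would do.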