Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Faster Query Times for Fully Dynamic $k$-Center Clustering with Outliers

Authors: Leyla Biabani, Annika Hennes, Morteza Monemizadeh, Melanie Schmidt

NeurIPS 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We consider this problem in the fully dynamic model, i.e., under insertions and deletions of points, for the case that the metric space has bounded doubling dimension dim. We utilize a hierarchical data structure to maintain the points and their neighborhoods, which enables us to efficiently find the clusters. In particular, our data structure can be queried at any time to generate a (3 + ε)-approximate solution for input values of k and z in worst-case query time ε^(-O(dim)) · k · log n · log log Δ, where Δ is the ratio between the maximum and minimum distance between two points in P. Moreover, it allows insertion/deletion of a point in worst-case update time ε^(-O(dim)) · log n · log Δ.
Researcher Affiliation | Academia | Leyla Biabani (Eindhoven University of Technology, Eindhoven, The Netherlands); Annika Hennes (Heinrich Heine University Düsseldorf, Düsseldorf, Germany); Morteza Monemizadeh (Eindhoven University of Technology, Eindhoven, The Netherlands); Melanie Schmidt (Heinrich Heine University Düsseldorf, Düsseldorf, Germany)
Pseudocode | Yes | Procedure MAXCOVERAGE(k, r); Algorithm 1: FINDCENTERS(k, z)
Open Source Code | No | The paper does not include any statement about releasing source code or a link to a code repository.
Open Datasets | No | This paper is theoretical and does not involve empirical evaluation on datasets. Therefore, it does not mention public datasets or their access information.
Dataset Splits | No | This paper is theoretical and does not involve empirical evaluation on datasets. Therefore, it does not mention dataset splits for training, validation, or testing.
Hardware Specification | No | This paper is theoretical and does not involve running experiments on specific hardware. Therefore, no hardware specifications are provided.
Software Dependencies | No | This paper is theoretical and does not describe an implementation that would require specific software versions for reproducibility.
Experiment Setup | No | This paper is theoretical and does not involve empirical experimentation. Therefore, it does not provide details about an experimental setup, such as hyperparameters or training settings.
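For context on the pseudocode names above (MAXCOVERAGE(k, r) and FINDCENTERS(k, z)): the following is a minimal static sketch of the classic greedy 3-approximation for k-center with z outliers (Charikar et al.) on which such procedures are typically built. This is not the paper's dynamic (3 + ε) data structure; the Euclidean point representation, the brute-force radius search, and the function names are illustrative assumptions only.

```python
import math
from itertools import combinations

def max_coverage(points, k, z, r):
    """Greedy feasibility check for a candidate radius r: repeatedly pick
    the radius-r ball (centered at an input point) covering the most
    still-uncovered points, then discard everything within 3r of that
    center. Returns (centers, ok), where ok means at most z outliers remain."""
    uncovered = list(points)
    centers = []
    for _ in range(k):
        if not uncovered:
            break
        # Candidate centers are input points; choose the one whose
        # radius-r ball covers the most uncovered points.
        best = max(points, key=lambda c: sum(math.dist(c, p) <= r for p in uncovered))
        centers.append(best)
        uncovered = [p for p in uncovered if math.dist(best, p) > 3 * r]
    return centers, len(uncovered) <= z

def find_centers(points, k, z):
    """Try every pairwise distance as a candidate radius, smallest first;
    the first feasible radius yields a 3-approximate solution (the chosen
    centers cover all but at most z points within 3r)."""
    radii = sorted({math.dist(p, q) for p, q in combinations(points, 2)})
    for r in radii:
        centers, ok = max_coverage(points, k, z, r)
        if ok:
            return centers, r
    return list(points)[:k], radii[-1] if radii else 0.0
```

On a point set with two tight pairs and one far-away point, calling `find_centers(points, 2, 1)` places the two centers on the pairs and leaves the far point as the single allowed outlier. The O(n²) radius enumeration is exactly the kind of cost the paper's hierarchical structure avoids in the dynamic setting.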