Distributed $k$-Clustering for Data with Heavy Noise

Authors: Shi Li, Xiangyu Guo

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Implementation of our algorithm for (k, z)-center shows that it outperforms many previous algorithms, both in terms of communication cost and quality of the output solution. We perform experiments comparing our main algorithm stated in Theorem 1.1 with many previous ones on real-world datasets. The results show that it matches the state-of-the-art method in both solution quality (objective value) and communication cost.
Researcher Affiliation | Academia | Xiangyu Guo, University at Buffalo, Buffalo, NY 14260, xiangyug@buffalo.edu; Shi Li, University at Buffalo, Buffalo, NY 14260, shil@buffalo.edu
Pseudocode | Yes | Algorithm 1: kzc(k, z, (Q, w), L); Algorithm 2: aggregating(Q, L, y); Algorithm 3: dist-kzc, with input on all parties: n, k, z, m, L, ϵ; input on machine i: dataset P_i with |P_i| = n_i; output: a set C ⊆ P or "No" (which certifies L* > L). An illustrative code sketch in this spirit follows the table.
Open Source Code | No | The paper does not provide any specific links or explicit statements about the availability of source code for the methodology described.
Open Datasets | No | The paper only states that "We perform experiments comparing our main algorithm stated in Theorem 1.1 with many previous ones on real-world datasets," without naming or linking the datasets used.
Dataset Splits | No | The paper mentions running experiments on "real-world datasets" but does not specify any details regarding training, validation, or test splits, nor does it refer to standard splits with citations.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory amounts, or specific computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software details like library or solver names with version numbers.
Experiment Setup | No | The paper discusses the theoretical algorithms and their complexity but does not provide specific experimental setup details such as hyperparameter values, optimizer settings, or training configurations.
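
The pseudocode row above names the paper's subroutines kzc, aggregating, and dist-kzc. As a rough illustration of the kind of primitives such a distributed (k, z)-center algorithm builds on, here is a minimal Python sketch of (i) the standard greedy radius test for k-center with z outliers (in the spirit of Charikar et al.'s outlier algorithm) and (ii) a toy "summarize locally, test at the coordinator" round. All function names, the net-based summary, and the constants (alpha = 3, eps = 0.5) are illustrative assumptions; this is not the authors' exact kzc, aggregating, or dist-kzc, and it carries none of their communication or approximation guarantees.

```python
import numpy as np


def kzc_sketch(points, weights, k, z, L, alpha=3.0):
    """Greedy radius test for (k, z)-center (classic outlier greedy; NOT the paper's kzc).

    Given a guessed radius L, either return at most k centers whose alpha*L-balls
    leave at most z total weight uncovered, or return None to signal that the
    guess L appears too small.
    """
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    uncovered = np.ones(len(points), dtype=bool)
    centers = []
    for _ in range(k):
        if not uncovered.any():
            break
        # Pick the point whose L-ball covers the most uncovered weight ...
        gain = ((dists <= L) & uncovered[None, :]).astype(float) @ weights
        c = int(np.argmax(gain))
        centers.append(points[c])
        # ... and mark everything within alpha*L of it as covered.
        uncovered &= dists[c] > alpha * L
    if weights[uncovered].sum() <= z:
        return centers  # success: at most z weight is left out as outliers
    return None         # "No": the optimum radius likely exceeds L


def net_summary(points, radius):
    """Greedy net summary: every point lies within `radius` of some net point,
    and each net point's weight counts the points snapped to it. A stand-in for
    the paper's aggregating subroutine, which is more involved."""
    net_pts, net_wts = [], []
    for p in np.asarray(points, dtype=float):
        for j, q in enumerate(net_pts):
            if np.linalg.norm(p - q) <= radius:
                net_wts[j] += 1.0
                break
        else:
            net_pts.append(p)
            net_wts.append(1.0)
    return np.array(net_pts), np.array(net_wts)


def dist_kzc_toy(partitions, k, z, L, eps=0.5):
    """Toy coordinator round: each machine ships a weighted eps*L-net summary of
    its local data, and the coordinator runs the radius test on the pooled
    summaries. Illustrates the round structure only."""
    summaries = [net_summary(Pi, eps * L) for Pi in partitions]
    Q = np.concatenate([pts for pts, _ in summaries])
    w = np.concatenate([wts for _, wts in summaries])
    # Snapping moves points by at most eps*L, so test a slightly inflated guess.
    return kzc_sketch(Q, w, k, z, (1.0 + eps) * L)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three well-separated 2-D clusters split across three "machines".
    parts = [rng.normal(c, 1.0, size=(200, 2)) for c in (0.0, 10.0, 20.0)]
    print(dist_kzc_toy(parts, k=3, z=10, L=4.0))
```

In a full algorithm the guessed radius L is not fixed; it is typically searched (for example, by binary search over candidate pairwise distances), rerunning the radius test until the smallest accepted guess is found.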