k-means++: few more steps yield constant approximation
Authors: Davin Choo, Christoph Grunau, Julian Portmann, Vaclav Rozhon
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we improve their analysis to show that, for any arbitrarily small constant ε > 0, with only εk additional local search steps, one can achieve a constant approximation guarantee (with high probability in k), resolving an open problem in their paper. Our improvements rely on the following structural observation: After running k-means++, most of the clusters of the optimal solution are already well approximated with high probability in k. |
| Researcher Affiliation | Academia | ETH Zürich. |
| Pseudocode | Yes | Algorithm 1 k-means++ seeding; Algorithm 2 One step of Local Search++ |
| Open Source Code | No | No explicit statement or link for open-source code is found in the paper. |
| Open Datasets | No | The paper is theoretical and does not use or evaluate on a specific public dataset. |
| Dataset Splits | No | The paper is theoretical and does not report on empirical experiments or data splits. |
| Hardware Specification | No | The paper is theoretical and does not describe hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup or hyperparameters. |
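Since the paper provides pseudocode but no open-source code, a minimal sketch of the standard k-means++ seeding procedure (the paper's Algorithm 1, i.e. D²-sampling) may help clarify what the analyzed algorithm does. Function and variable names here are illustrative, not taken from the paper:

```python
import random


def _sqdist(a, b):
    """Squared Euclidean distance between two points given as tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans_pp_seeding(points, k, rng=random):
    """k-means++ seeding via D^2-sampling: pick the first center uniformly
    at random, then pick each subsequent center with probability
    proportional to its squared distance to the nearest chosen center."""
    centers = [rng.choice(points)]
    # d2[i] = squared distance from points[i] to its nearest center so far
    d2 = [_sqdist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        total = sum(d2)
        if total == 0:  # every point coincides with some chosen center
            break
        # Sample an index with probability d2[i] / total
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(d2):
            acc += w
            if acc >= r:
                centers.append(points[i])
                break
        # Update nearest-center distances with the newly added center
        d2 = [min(d2[i], _sqdist(p, centers[-1]))
              for i, p in enumerate(points)]
    return centers
```

The paper's contribution (Algorithm 2, Local Search++) then runs εk additional local-search swap steps on top of this seeding to reach a constant-factor approximation with high probability; the seeding alone is only O(log k)-competitive in expectation.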