Consistent k-Clustering
Authors: Silvio Lattanzi, Sergei Vassilvitskii
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we show experimentally that our approach performs much better than the theoretical bound, with the number of changes growing approximately as O(log n). |
| Researcher Affiliation | Industry | ¹Google, Zurich, Switzerland; ²Google, New York, New York, USA. |
| Pseudocode | Yes | Algorithm 1: Single Meyerson sketch; Algorithm 2: Compute Meyerson(Xt, φ); Algorithm 3: Update Meyerson(M1, …, Ms, xt, φ); Algorithm 4: Create Weighted Instance(M1, …, Ms, φ, Xt); Algorithm 5: Update Weights(M, w, x); Algorithm 6: Consistent k-clustering algorithm (an illustrative sketch of the Meyerson-style update step follows the table) |
| Open Source Code | No | The paper does not provide any specific links or statements indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our algorithm on three datasets from the UCI Repository (Lichman, 2013) that vary in data size and dimensionality. ... UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml. |
| Dataset Splits | No | The paper mentions evaluating on datasets but does not specify train/validation/test splits, percentages, or sample counts for these datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using k-means++ and a local search algorithm, but it does not specify software names with version numbers for reproducibility. |
| Experiment Setup | No | The paper describes the datasets and some algorithm modifications, but it does not provide specific experimental setup details such as hyperparameters or system-level training settings. |
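
Since the Pseudocode row names a single Meyerson sketch as the paper's core streaming primitive, the snippet below gives a minimal, illustrative Python version of that style of update: an arriving point is opened as a new center with probability proportional to its distance to the nearest existing center (capped at 1), and otherwise folded into that center's weight. This is a sketch under assumptions, not the authors' exact Algorithm 1; the function name `meyerson_sketch`, the `facility_cost` parameter, and the toy data are hypothetical and chosen only for illustration.

```python
import math
import random


def meyerson_sketch(points, facility_cost, dist=None):
    """Illustrative Meyerson-style streaming sketch (hypothetical helper,
    not the paper's Algorithm 1). Streams `points`, opening each arriving
    point as a new center with probability min(1, d / facility_cost),
    where d is its distance to the nearest open center; otherwise the
    point is assigned to that center and its weight is incremented."""
    if dist is None:
        # Euclidean distance by default; such sketches work for general metrics.
        dist = lambda a, b: math.dist(a, b)

    centers = []   # opened centers
    weights = []   # number of stream points assigned to each center

    for x in points:
        if not centers:
            centers.append(x)
            weights.append(1)
            continue
        # Find the closest already-opened center and its distance.
        j, d = min(enumerate(dist(x, c) for c in centers), key=lambda t: t[1])
        if random.random() < min(1.0, d / facility_cost):
            centers.append(x)      # open x as a new center
            weights.append(1)
        else:
            weights[j] += 1        # fold x into its nearest center
    return centers, weights


if __name__ == "__main__":
    # Toy usage: two well-separated Gaussian blobs in the plane.
    random.seed(0)
    stream = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(200)]
    stream += [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(200)]
    random.shuffle(stream)
    centers, weights = meyerson_sketch(stream, facility_cost=1.0)
    print(len(centers), "centers opened for", len(stream), "points")
```

The purpose of such a sketch, as reflected in the paper's Algorithm 4 (Create Weighted Instance), is to maintain a small weighted summary of the stream on which a k-clustering algorithm can then be run, rather than reclustering all points seen so far.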