Explainable k-Means and k-Medians Clustering

Authors: Michal Moshkovitz, Sanjoy Dasgupta, Cyrus Rashtchian, Nave Frost

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We provide several new theoretical results on explainable k-means and k-medians clustering. Our new algorithms and lower bounds are summarized in Table 1.
Researcher Affiliation | Academia | University of California, San Diego; Tel Aviv University. Correspondence to: Nave Frost <navefrost@mail.tau.ac.il>, Michal Moshkovitz <mmoshkovitz@eng.ucsd.edu>, Cyrus Rashtchian <crashtchian@eng.ucsd.edu>.
Pseudocode | Yes | Algorithm 1: Iterative Mistake Minimization (an illustrative sketch of this procedure follows the table).
Open Source Code | No | The paper does not provide any links to source code or explicitly state that the code for the described methodology is open source or publicly available.
Open Datasets | No | The paper works with abstract data sets (e.g., "a set of points X = {x1, ..., xn} ∈ R^d") and provides illustrative examples (e.g., Figure 1, Figure 3), but does not mention or provide access information for any specific publicly available datasets used for empirical training or evaluation.
Dataset Splits | No | The paper is theoretical and does not report on empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and analysis. It does not mention any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and describes algorithms and proofs. It does not list any specific software dependencies with version numbers required for replication.
Experiment Setup | No | The paper is theoretical and presents algorithms and their guarantees. It does not detail an experimental setup, including hyperparameters or training configurations, for empirical evaluations.
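The Pseudocode row refers to the paper's Algorithm 1, Iterative Mistake Minimization (IMM): a threshold tree is grown over a given set of reference centers, and each internal node uses the single-feature cut that separates the centers while "mistaking" (cutting away from their centers) as few points as possible. The Python sketch below is only a minimal illustration of that idea, not the authors' implementation; the function and variable names (build_imm_tree, _split, etc.) are hypothetical, the candidate-threshold search is simplified, and the k reference centers are assumed to be distinct.

```python
# Minimal sketch of the Iterative Mistake Minimization (IMM) idea:
# build a threshold tree that separates k given reference centers with
# axis-aligned cuts, greedily minimizing the number of "mistakes"
# (points cut away from their assigned center) at every node.
import numpy as np

def build_imm_tree(X, centers, assignment):
    """X: (n, d) points; centers: (k, d) distinct centers; assignment: center index per point."""
    return _split(X, centers, assignment,
                  np.arange(len(X)), np.arange(len(centers)))

def _split(X, centers, assignment, point_idx, center_idx):
    # Drop points whose assigned center has already been cut away from this node.
    point_idx = point_idx[np.isin(assignment[point_idx], center_idx)]
    if len(center_idx) == 1:
        return {"leaf": int(center_idx[0])}  # a single center remains: make a leaf
    best = None  # (mistakes, feature, threshold)
    for i in range(X.shape[1]):
        # Candidate thresholds: coordinates of the points and centers in this node.
        candidates = np.unique(np.concatenate(
            [X[point_idx, i], centers[center_idx, i]]))
        for theta in candidates:
            n_left = int(np.sum(centers[center_idx, i] <= theta))
            if n_left == 0 or n_left == len(center_idx):
                continue  # the cut must split the centers into two nonempty groups
            # A mistake is a point that falls on the opposite side of the cut
            # from its assigned center.
            point_side = X[point_idx, i] <= theta
            center_side = centers[assignment[point_idx], i] <= theta
            mistakes = int(np.sum(point_side != center_side))
            if best is None or mistakes < best[0]:
                best = (mistakes, i, theta)
    _, i, theta = best
    go_left = X[point_idx, i] <= theta
    left_centers = center_idx[centers[center_idx, i] <= theta]
    right_centers = center_idx[centers[center_idx, i] > theta]
    return {
        "feature": int(i),
        "threshold": float(theta),
        "left": _split(X, centers, assignment, point_idx[go_left], left_centers),
        "right": _split(X, centers, assignment, point_idx[~go_left], right_centers),
    }
```

In practice, the reference centers and per-point assignments would come from a standard (non-explainable) k-means or k-medians run, after which the threshold tree gives an explainable, axis-aligned approximation of that clustering.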