Constant Approximation for Individual Preference Stable Clustering
Authors: Anders Aamand, Justin Chen, Allen Liu, Sandeep Silwal, Pattara Sukprasert, Ali Vakilian, Fred Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally evaluate our O(1)-IP stable clustering algorithm against k-means++, which is the empirically best-known algorithm in [1]. We also compare k-means++ with our optimal algorithm for Min-IP stability. We run experiments on the Adult data set. |
| Researcher Affiliation | Collaboration | Anders Aamand (MIT, aamand@mit.edu); Justin Y. Chen (MIT, justc@mit.edu); Allen Liu (MIT, cliu568@mit.edu); Sandeep Silwal (MIT, silwal@mit.edu); Pattara Sukprasert (Databricks, pat.sukprasert@databricks.com); Ali Vakilian (TTIC, vakilian@ttic.edu); Fred Zhang (UC Berkeley, z0@berkeley.edu) |
| Pseudocode | Yes | Algorithm 1 BALL-CARVING |
| Open Source Code | No | The paper does not provide a direct link to its own open-source code or explicitly state that the code for its methods is publicly available. It only mentions using the 'k-means++ implementation of Scikit-learn package [21]'. |
| Open Datasets | Yes | We run experiments on the Adult data set (https://archive.ics.uci.edu/ml/datasets/adult); see [18]. For IP stability, we also use four more datasets from the UCI ML repository [11] |
| Dataset Splits | No | The paper mentions using various datasets but does not provide specific details on the training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined standard splits). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments (e.g., GPU models, CPU models, or memory specifications). |
| Software Dependencies | No | The paper mentions 'Python 3' and 'Scikit-learn package [21]' but does not provide specific version numbers for these software dependencies, which are necessary for reproducible descriptions. |
| Experiment Setup | Yes | We use the k-means++ implementation of Scikit-learn package [21]... we set the parameter n_init=1 and then compute the average of many different runs. |
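The experiment-setup row above describes the baseline configuration: Scikit-learn's k-means++ with `n_init=1`, averaged over many runs. A minimal sketch of that setup is shown below; the random data, number of clusters, and run count are placeholders (the paper uses the UCI Adult data set), not the authors' actual parameters.

```python
# Hedged sketch of the baseline setup quoted above: scikit-learn k-means++
# with n_init=1, averaging the objective over many independently seeded runs.
# X, k, and the number of runs are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # placeholder for the Adult data set features

k = 10
inertias = []
for seed in range(20):  # "compute the average of many different runs"
    km = KMeans(n_clusters=k, init="k-means++", n_init=1, random_state=seed)
    km.fit(X)
    inertias.append(km.inertia_)

avg_inertia = float(np.mean(inertias))
print(f"average k-means objective over {len(inertias)} runs: {avg_inertia:.2f}")
```

Setting `n_init=1` disables Scikit-learn's internal restarts, so each run reflects a single k-means++ seeding; averaging across external seeds then measures typical, rather than best-of-restarts, performance.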