Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Greedy Sampling for Approximate Clustering in the Presence of Outliers
Authors: Aditya Bhaskara, Sharvaree Vadgama, Hong Xu
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we demonstrate the empirical performance of our algorithm on multiple real and synthetic datasets, and compare it to existing heuristics. |
| Researcher Affiliation | Academia | Aditya Bhaskara University of Utah EMAIL Sharvaree Vadgama University of Utah EMAIL Hong Xu University of Utah EMAIL |
| Pseudocode | Yes | Algorithm 1 k-center with outliers |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-sourcing the code for the methodology described. |
| Open Datasets | Yes | All real datasets we use are available from the UCI repository [15]. |
| Dataset Splits | No | The paper names the datasets used but does not provide specific training, validation, or test splits. |
| Hardware Specification | No | The paper does not specify any particular hardware (GPU, CPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | In order to simulate corruptions, we randomly choose 2.5% of the points in the datasets and corrupt all the coordinates by adding independent noise in a pre-defined range. |
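The pseudocode row above refers to the paper's Algorithm 1 (k-center with outliers), which is not released as code. As context only, the classic greedy farthest-first heuristic for plain k-center (Gonzalez) can be sketched as follows; this is a standard baseline, not a reproduction of the paper's outlier-robust algorithm.

```python
import math


def farthest_first(points, k):
    """Greedy farthest-first k-center heuristic (Gonzalez).

    NOTE: this is the classic baseline for k-center WITHOUT outliers;
    the paper's Algorithm 1 modifies greedy sampling to tolerate
    outliers and is only given as pseudocode there.
    """
    # Seed with an arbitrary first center.
    centers = [points[0]]
    # dist[j] = distance from point j to its nearest chosen center.
    dist = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        # Pick the point farthest from all current centers.
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])
        # Update nearest-center distances.
        for j, p in enumerate(points):
            dist[j] = min(dist[j], math.dist(p, points[i]))
    return centers
```

This greedy choice yields the well-known 2-approximation for k-center; a single far-away outlier, however, is immediately picked as a center, which is exactly the failure mode outlier-robust variants address.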
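The experiment-setup row describes the paper's corruption procedure: 2.5% of points are chosen at random and every coordinate is perturbed by independent noise in a pre-defined range. A minimal sketch of that procedure, assuming uniform noise and a hypothetical `noise_range` (the paper does not state the range):

```python
import random


def corrupt_dataset(points, fraction=0.025, noise_range=(-10.0, 10.0), seed=0):
    """Simulate corruptions as described in the paper's setup:
    pick `fraction` of the points at random and add independent
    noise to all of their coordinates.

    `noise_range` and the uniform distribution are ASSUMPTIONS here;
    the source only says "independent noise in a pre-defined range".
    """
    rng = random.Random(seed)
    n = len(points)
    n_outliers = max(1, round(fraction * n))
    outlier_idx = set(rng.sample(range(n), n_outliers))
    corrupted = []
    for i, p in enumerate(points):
        if i in outlier_idx:
            # Corrupt every coordinate with independent uniform noise.
            corrupted.append([x + rng.uniform(*noise_range) for x in p])
        else:
            corrupted.append(list(p))
    return corrupted, outlier_idx
```

Returning the outlier indices alongside the corrupted data makes it easy to check afterwards whether a clustering heuristic actually discarded the planted outliers.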