Approximation algorithms for stochastic clustering

Authors: David Harris, Shi Li, Aravind Srinivasan, Khoa Trinh, Thomas Pensyl

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We consider stochastic settings for clustering, and develop provably-good (approximation) algorithms for a number of these notions. These algorithms allow one to obtain better approximation ratios compared to the usual deterministic clustering setting. Additionally, they offer a number of advantages including providing fairer clustering and clustering which has better long-term behavior for each user. In particular, they ensure that every user is guaranteed to get good service (on average). We also complement some of these with impossibility results.
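To make the "good service on average" guarantee concrete, here is a minimal sketch (toy one-dimensional data, not from the paper) of the idea behind stochastic clustering: a randomized solution is a probability distribution over center sets, and each client is judged by its expected distance to the nearest open center.

```python
# Illustration only, on made-up data: a distribution over center sets can
# bound EVERY client's expected connection distance, even though each
# individual deterministic solution serves some client badly.

clients = [0.0, 10.0]                  # hypothetical client positions
center_sets = [[0.0], [10.0]]          # two candidate 1-center solutions
probs = [0.5, 0.5]                     # uniform mixture over the solutions

def dist_to_set(c, centers):
    """Distance from client c to its nearest center in `centers`."""
    return min(abs(c - f) for f in centers)

# Deterministic view: either single solution leaves one client at distance 10.
for S in center_sets:
    print(S, [dist_to_set(c, S) for c in clients])

# Stochastic view: every client's expected distance is only 5.
for c in clients:
    exp = sum(p * dist_to_set(c, S) for p, S in zip(probs, center_sets))
    print(f"client at {c}: expected distance {exp}")
```

Here the uniform mixture gives both clients expected distance 5, whereas any single choice of center leaves one of them at distance 10; this is the per-user, on-average guarantee the abstract refers to.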
Researcher Affiliation | Collaboration | David G. Harris, Department of Computer Science, University of Maryland, College Park, MD 20742, davidgharris29@gmail.com; Shi Li, University at Buffalo, Buffalo, NY, shil@buffalo.edu; Thomas Pensyl, Bandwidth, Inc., Raleigh, NC, tpensyl@bandwidth.com; Aravind Srinivasan, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, srin@cs.umd.edu; Khoa Trinh, Google, Mountain View, CA 94043, khoatrinh@google.com
Pseudocode | Yes | Algorithm 1: GREEDYCLUSTER(F, w); Algorithm 2: Iterative rounding algorithm for chance k-coverage; Algorithm 3: Rounding algorithm for k-coverage for uniform p or uniform r; Algorithm 4: Rounding algorithm for k-supplier; Algorithm 5: Rounding algorithm for k-center; Algorithm 7: Partial-cluster based algorithm for k-center
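The paper's pseudocode is not reproduced on this page. As a hedged illustration of the kind of greedy filtering subroutine that a name like GREEDYCLUSTER(F, w) typically denotes in LP-rounding analyses (assumed semantics: each point carries a radius w, and a maximal set of points with separated radius-scaled balls is kept), one could sketch:

```python
# A sketch under assumed semantics, NOT a transcription of the paper's
# Algorithm 1: greedily keep points (in increasing order of w) that are
# far from all previously kept points.

def greedy_cluster(points, w, dist):
    """Return cluster centers whose 2*w-balls do not capture one another.

    points: iterable of point ids
    w:      dict mapping point id -> nonnegative weight/radius
    dist:   symmetric distance function on point ids

    Processing points in increasing order of w, point j becomes a center
    unless some already-chosen center lies within 2*w[j] of it; every
    skipped point thus has a center of no larger weight within 2*w of it.
    """
    centers = []
    for j in sorted(points, key=lambda p: w[p]):
        if all(dist(j, c) > 2 * w[j] for c in centers):
            centers.append(j)
    return centers

# Tiny usage example on the real line.
pos = {"a": 0.0, "b": 1.0, "c": 9.0}   # hypothetical point positions
radii = {"a": 1.0, "b": 1.5, "c": 4.0}
d = lambda x, y: abs(pos[x] - pos[y])
print(greedy_cluster(pos.keys(), radii, d))  # -> ['a', 'c']; 'b' is filtered by 'a'
```

The invariant (each filtered point lies within twice its own weight of a kept center of no larger weight) is what such filtering steps typically contribute to approximation-ratio arguments.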
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor does it link to a code repository.
Open Datasets | No | This is a theoretical research paper focused on algorithm design and approximation ratios; it does not involve training on data or reference any publicly available datasets.
Dataset Splits | No | This is a theoretical research paper, and it does not describe experimental validation or data splits.
Hardware Specification | No | This is a theoretical research paper focused on algorithm design and analysis, and therefore it does not describe any specific hardware used for experiments.
Software Dependencies | No | This is a theoretical research paper, and it does not specify software dependencies with version numbers used for experiments.
Experiment Setup | No | This is a theoretical research paper focused on algorithm design and analysis, and thus it does not include details of experimental setup such as hyperparameters or training settings.