Gradient Based Clustering
Authors: Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on real data demonstrate the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA 2Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia 3Faculty of Sciences, University of Novi Sad, Novi Sad, Serbia. |
| Pseudocode | No | No, the paper describes the algorithm steps in paragraph and equation format but does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | No | No, the paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The experiments presented in this section were performed on the MNIST dataset (LeCun et al.). In Appendix C, additional numerical experiments are presented on the Iris dataset (Fisher, 1936). |
| Dataset Splits | No | No, the paper uses a subset of the MNIST training set and the Iris dataset but does not describe an explicit training/validation/test split; the selected data are used directly for clustering and evaluation. |
| Hardware Specification | No | No, the paper only vaguely mentions "on a GPU" without providing any specific model numbers, processor types, or memory details. |
| Software Dependencies | No | No, the paper mentions general software like Python and various libraries but does not provide specific version numbers for any key software components. |
| Experiment Setup | Yes | In line with our theory, we set the step-size equal to α = 1/N = 1/3500. For a fair comparison, we set the initial centers of both methods to be the same... We run the clustering experiments for 20 times... we fix the Huber loss parameter to δ = 10 and use the same step-size as in the standard K-means case, i.e., α = 1/3500. |
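The experiment setup above specifies a gradient-style clustering update with step-size α = 1/N. A minimal sketch of one such update is shown below; this is an illustrative reconstruction, not the authors' code, and the function names are hypothetical. Each iteration assigns points to their nearest center and then takes a gradient step on the K-means objective (1/2)Σᵢ‖xᵢ − c_{aᵢ}‖², where aᵢ is the assignment of point i.

```python
import numpy as np

def gradient_kmeans_step(X, centers, alpha):
    """One assignment + gradient step on the K-means objective.

    Illustrative sketch only; the paper's actual implementation may differ.
    """
    # Assignment: nearest center for each point
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # Gradient of (1/2) * sum_i ||x_i - c_{assign(i)}||^2 w.r.t. each center
    grad = np.zeros_like(centers)
    for j in range(centers.shape[0]):
        mask = assign == j
        if mask.any():
            grad[j] = (centers[j] - X[mask]).sum(axis=0)
    return centers - alpha * grad, assign

# Usage on synthetic 2-D data (hypothetical, not the paper's MNIST setup)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(3.0, 0.1, (50, 2))])
centers = X[rng.choice(len(X), size=2, replace=False)].copy()
alpha = 1.0 / len(X)  # step-size alpha = 1/N, matching the setup above
for _ in range(100):
    centers, assign = gradient_kmeans_step(X, centers, alpha)
```

With α = 1/N, the gradient step moves each center a fraction n_j/N of the way toward its cluster mean (n_j being the cluster size), so the iteration contracts toward the K-means fixed point; the Huber-loss variant mentioned in the setup would replace the quadratic per-point term with a Huber penalty (δ = 10) in the gradient.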