Generalized Majorization-Minimization
Authors: Sobhan Naderi Parizi, Kun He, Reza Aghajani, Stan Sclaroff, Pedro Felzenszwalb
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate G-MM and MM algorithms on k-means clustering and LS-SVM training on various datasets. We conduct experiments on four clustering datasets: Norm25 (Arthur & Vassilvitskii, 2007), D31 (Veenman et al., 2002), Cloud (Arthur & Vassilvitskii, 2007), and GMM200. |
| Researcher Affiliation | Collaboration | 1Google Research 2Facebook Reality Labs 3University of California San Diego 4Boston University 5Brown University. |
| Pseudocode | Yes | Algorithm 1: G-MM optimization |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, nor does it explicitly state that the code is released or available in supplementary materials. |
| Open Datasets | Yes | We conduct experiments on four clustering datasets: Norm25 (Arthur & Vassilvitskii, 2007), D31 (Veenman et al., 2002), Cloud (Arthur & Vassilvitskii, 2007), and GMM200; the mammals dataset (Heitz et al., 2009); and the MIT-Indoor dataset (Quattoni & Torralba, 2009). |
| Dataset Splits | Yes | We report 5-fold cross-validation performance. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like Histogram of Oriented Gradients (HOG) and PCA, but it does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We set λ = 0.4 in (8) and the number of folds to K = 10 in our experiments. We use η = 0.1 in all experiments. |
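The table above notes that the paper evaluates G-MM against standard MM on k-means clustering. For context, Lloyd's k-means is the classic MM instance: each assignment step builds a surrogate that upper-bounds the k-means objective and is tight at the current centers, and the update step minimizes that surrogate. The sketch below illustrates this MM template only; the function name `kmeans_mm`, the initialization, and the iteration count are illustrative assumptions, not the paper's implementation (G-MM, per the paper, relaxes the tightness requirement on the surrogate).

```python
import numpy as np

def kmeans_mm(X, k, n_iter=50, seed=0):
    """Lloyd's k-means viewed as a Majorization-Minimization loop.

    Majorize: fixing cluster assignments at the current centers gives a
    surrogate that upper-bounds the k-means objective and touches it at
    the current iterate.  Minimize: the surrogate is minimized in closed
    form by the cluster centroids.
    """
    rng = np.random.default_rng(seed)
    # Illustrative initialization: k distinct points drawn from X.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Majorization step: squared distances to each center,
        # assignments fixed at the nearest center (tight upper bound).
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Minimization step: each center moves to its cluster centroid.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels
```

Each iteration of this loop cannot increase the k-means objective, which is the monotone-descent property that both MM and the paper's G-MM variant preserve.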