Balanced Clustering with Least Square Regression
Authors: Hanyang Liu, Junwei Han, Feiping Nie, Xuelong Li
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on seven real-world benchmarks demonstrate that our approach not only produces good clustering performance but also guarantees a balanced clustering result. |
| Researcher Affiliation | Academia | Hanyang Liu (1), Junwei Han (1), Feiping Nie (2), Xuelong Li (3); (1) School of Automation, Northwestern Polytechnical University, Xi'an, 710072, P. R. China; (2) School of Computer Science and Center for OPTIMAL, Northwestern Polytechnical University, Xi'an, 710072, P. R. China; (3) Center for OPTIMAL, State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, Shaanxi, P. R. China |
| Pseudocode | Yes | Algorithm 1 Algorithm of General Method of ALM |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Seven real-world datasets are used in the experiments, including two UCI datasets, Wine and Ionosphere, and five face datasets, UMIST, YALE-B, AR, JAFFE, and CMU-PIE. For the high-dimensional datasets, dimension reduction is performed in the preprocessing of the datasets. ... https://archive.ics.uci.edu/ml/index.html http://images.ee.umist.ac.uk/danny/database.html http://www.kasrl.org/jaffe.html |
| Dataset Splits | No | The paper uses various real-world datasets but does not explicitly specify training/test/validation dataset splits (e.g., percentages or sample counts for each split). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | The first one is the regularization parameter γ in Eq.(13)... We set γ to 10^-5 for all the datasets. The balance parameter λ, and the coefficient of the Lagrange multipliers μ, play very important roles in the BCLS algorithm and both should be determined carefully. We tune them by grid search from λ ∈ {10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4, 10^5} and μ ∈ {10^-3, 10^-2, 10^-1, 10^0}. The last one is ρ, the updating rate of μ, and it should be set slightly greater than 1 (Bertsekas 1982). We set ρ to 1.002 for AR and 1.005 for the rest. |
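
The hyperparameter protocol quoted in the Experiment Setup row can be summarized in a short sketch. The snippet below is a minimal illustration, not the authors' code (none was released): `run_bcls` and `score_fn` are hypothetical placeholders for the BCLS/ALM solver and a clustering metric, while the grids for λ and μ, the fixed γ, and the updating rate ρ follow the values quoted above.

```python
import itertools

# Settings reconstructed from the Experiment Setup row above; `run_bcls` is a
# hypothetical stand-in for the authors' unreleased BCLS/ALM solver.
GAMMA = 1e-5                                     # regularization parameter, fixed for all datasets
LAMBDA_GRID = [10.0 ** p for p in range(-3, 6)]  # 10^-3 ... 10^5
MU_GRID = [10.0 ** p for p in range(-3, 1)]      # 10^-3 ... 10^0
RHO = 1.005                                      # updating rate of mu (1.002 for AR)


def alm_mu_schedule(mu0, rho=RHO, n_iters=50):
    """Penalty-coefficient schedule used inside ALM: mu <- rho * mu per iteration."""
    mus = [mu0]
    for _ in range(n_iters - 1):
        mus.append(mus[-1] * rho)
    return mus


def grid_search(run_bcls, X, n_clusters, labels_true, score_fn):
    """Exhaustive search over (lambda, mu), as described in the quoted setup."""
    best_params, best_score = None, float("-inf")
    for lam, mu in itertools.product(LAMBDA_GRID, MU_GRID):
        labels = run_bcls(X, n_clusters, lam=lam, mu=mu, gamma=GAMMA, rho=RHO)
        score = score_fn(labels_true, labels)
        if score > best_score:
            best_params, best_score = (lam, mu), score
    return best_params, best_score
```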