Robust Graph-Based Multi-View Clustering

Authors: Weixuan Liang, Xinwang Liu, Sihang Zhou, Jiyuan Liu, Siwei Wang, En Zhu (pp. 7462-7469)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on benchmark datasets verify the superiority of the proposed method against the compared state-of-the-art algorithms."
Researcher Affiliation | Academia | "1. College of Computer, National University of Defense Technology, Changsha, Hunan, China. 2. College of Intelligence Science and Technology, National University of Defense Technology, Changsha, Hunan, China."
Pseudocode | Yes | "The complete procedure of the proposed RG-MVC is summarized in Algorithm 1."
Open Source Code | Yes | "Our codes and appendix are available at https://github.com/wxliang/RG-MVC."
Open Datasets | Yes | "Seven benchmark datasets are adopted to demonstrate the effectiveness of the proposed method, including Flo17 [1], Flo102 [2], DIGIT [3], Mfeat [4], Cal102 [5], PFold [6] and YALE [7]." Dataset sources: [1] www.robots.ox.ac.uk/~vgg/data/flowers/17/ [2] www.robots.ox.ac.uk/~vgg/data/flowers/102/ [3] http://ss.sysu.edu.cn/py/ [4] https://archive.ics.uci.edu/ml/datasets/Multiple+Features [5] www.vision.caltech.edu/Image_Datasets/Caltech101/ [6] mkl.ucsd.edu/dataset/protein-fold-prediction [7] http://vision.ucsd.edu/content/yale-face-database
Dataset Splits | No | The paper mentions using benchmark datasets and repeating experiments to account for randomness, but it does not explicitly define training, validation, or test splits; evaluation is based on clustering performance over the entire datasets.
Hardware Specification | Yes | "All the experiments are conducted on a desktop computer with Intel(R) Core(TM) i7-7820X CPU and 64G RAM."
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "For all experiments, we set the number of clusters to the true class number of the corresponding dataset. ... To get rid of the adverse effect of the randomness of k-means clustering evaluation, we repeat this process for 50 times and record their average values as final clustering results."
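The repeat-and-average protocol quoted above can be sketched as follows. This is a minimal, self-contained pure-Python illustration, not the authors' code: the `kmeans` and `accuracy` helpers, the toy 2-D points, and the choice of best-map accuracy as the metric are all assumptions made for the sketch; the paper's actual pipeline runs k-means on learned multi-view representations with its own evaluation metrics.

```python
import random
from statistics import mean

def kmeans(points, k, seed, iters=50):
    """Plain Lloyd's k-means on 2-D points; returns a cluster label per point.
    Illustrative only -- seeds control the random center initialization."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2,
            )
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = (mean(x for x, _ in members), mean(y for _, y in members))
    return labels

def accuracy(labels, truth):
    """Best-map clustering accuracy for k=2: try both label permutations."""
    hits = sum(l == t for l, t in zip(labels, truth))
    return max(hits, len(truth) - hits) / len(truth)

# Two well-separated toy blobs standing in for a benchmark dataset.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
truth = [0, 0, 0, 1, 1, 1]

# Mirror the paper's protocol: rerun k-means with different random
# initializations 50 times and report the average metric.
scores = [accuracy(kmeans(points, 2, seed), truth) for seed in range(50)]
avg_acc = mean(scores)
```

Averaging over many random initializations, rather than reporting a single run, removes the luck of one particular center initialization from the reported numbers, which is exactly why the paper repeats the k-means evaluation 50 times.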