Multi-View Spectral Clustering with Optimal Neighborhood Laplacian Matrix
Authors: Sihang Zhou, Xinwang Liu, Jiyuan Liu, Xifeng Guo, Yawei Zhao, En Zhu, Yongping Zhai, Jianping Yin, Wen Gao (pp. 6965-6972)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on 9 datasets demonstrate the superiority of our algorithm against state-of-the-art methods, which verifies the effectiveness and advantages of the proposed ONMSC. |
| Researcher Affiliation | Academia | (1) College of Computer, National University of Defense Technology, Changsha 410073, China; (2) College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410073, China; (3) School of Cyberspace Science, Dongguan University of Technology, Guangdong 523808, China; (4) School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China |
| Pseudocode | Yes | Algorithm 1 Optimal Neighborhood Multi-View Spectral Clustering |
| Open Source Code | No | The paper states: 'In our experiments, the MATLAB implementation of all the compared algorithms is downloaded from the authors websites.' This refers to code for *other* algorithms, not their own. No explicit statement or link is provided for the source code of the proposed ONMSC method. |
| Open Datasets | Yes | For these datasets, all affinity matrices are pre-computed with carefully designed similarity functions and are publicly available from websites [1][2][3]. [1] http://mlg.ucd.ie/datasets/bbc.html [2] http://mkl.ucsd.edu/dataset/protein-fold-prediction [3] http://www.robots.ox.ac.uk/~vgg/data/ |
| Dataset Splits | No | The paper mentions hyperparameter tuning (e.g., 'the optimal neighbor numbers are carefully searched in the range of [0.1s, 0.2s, . . . , s]') and repeating clustering with random initialization, which implies a form of validation, but it does not specify explicit training, validation, and test dataset splits (e.g., percentages, sample counts, or predefined splits). |
| Hardware Specification | Yes | All our experiments are conducted on a desktop computer with a 3.6GHz Intel Core i7 CPU and 64GB RAM, MATLAB 2017a (64bit). |
| Software Dependencies | Yes | All our experiments are conducted on a desktop computer with a 3.6GHz Intel Core i7 CPU and 64GB RAM, MATLAB 2017a (64bit). |
| Experiment Setup | Yes | In our experiments, the MATLAB implementation of all the compared algorithms is downloaded from the authors' websites. The hyper-parameters are set according to the suggestions of the corresponding literature. Specifically, for all the compared spectral clustering algorithms, the optimal neighbor numbers are carefully searched in the range of [0.1s, 0.2s, . . . , s], where s = n/c is the average sample number in each category. As to our proposed method, the regularization parameter is chosen in the range of [2^0, 2^3, . . . , 2^15]. K-means clustering is adopted on the final representation to assign an appropriate label to each sample. In the experiment, to reduce the effect of randomness caused by k-means, we repeat the clustering process 50 times with random initialization and report the result with the smallest k-means distortion. |
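The "repeat k-means 50 times with random initialization and keep the result with the smallest distortion" step in the setup above can be sketched as follows. The paper's experiments were run in MATLAB; this is a minimal stdlib-only Python sketch with hypothetical helpers `kmeans` and `best_of_restarts` (not from the paper), shown only to illustrate the restart-and-select procedure:

```python
import random

def kmeans(points, k, iters=100, rng=None):
    """One k-means run with random initialization; returns (labels, distortion)."""
    rng = rng or random.Random()
    centers = rng.sample(points, k)  # random initialization
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
        # update step: each center moves to the mean of its members
        for j in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    # distortion = total within-cluster sum of squared distances
    distortion = sum(sum((a - b) ** 2 for a, b in zip(p, centers[labels[i]]))
                     for i, p in enumerate(points))
    return labels, distortion

def best_of_restarts(points, k, n_restarts=50, seed=0):
    """Repeat k-means with random initialization; keep the run with smallest distortion."""
    rng = random.Random(seed)
    return min((kmeans(points, k, rng=rng) for _ in range(n_restarts)),
               key=lambda r: r[1])

# toy 2-D "final representation": two well-separated blobs
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (5.2, 4.9)]
labels, distortion = best_of_restarts(pts, k=2, n_restarts=50)
```

In the paper, the same selection rule is applied to the spectral embedding produced by ONMSC; here the toy points stand in for that representation.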