EMGC²F: Efficient Multi-view Graph Clustering with Comprehensive Fusion

Authors: Danyang Wu, Jitao Lu, Feiping Nie, Rong Wang, Yuan Yuan

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on several benchmark datasets demonstrate that our proposals outperform SOTA competitors both in effectiveness and efficiency.
Researcher Affiliation | Academia | Danyang Wu1,2, Jitao Lu1,2, Feiping Nie1,2, Rong Wang2 and Yuan Yuan2. 1School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, P. R. China. 2School of Artificial Intelligence, Optics and Electronics (iOPEN), and the Key Laboratory of Intelligent Interaction and Applications (Ministry of Industry and Information Technology), Northwestern Polytechnical University, Xi'an 710072, P. R. China.
Pseudocode | Yes | Algorithm 1: The Algorithm for Problem (9)
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | To evaluate the performance of our proposals, we employ 4 popular benchmark datasets including COIL20 [Nene et al., ], DIGIT10 [Wu et al., 2021a], MSRC [Lee and Grauman, 2009], ORL [Samaria and Harter, 1994] and they are summarized into Table 1.
Dataset Splits | No | The paper mentions evaluating on benchmark datasets and running K-means 100 times as post-processing for some competitors, but does not specify train, validation, or test data splits for its experiments.
Hardware Specification | Yes | All the experiments are implemented in MATLAB R2020b on a desktop with Intel i7-7700K @ 4.2GHz CPU and 32GB RAM.
Software Dependencies | Yes | All the experiments are implemented in MATLAB R2020b.
Experiment Setup | Yes | Following [Nie et al., 2017b], we construct KNN graph and set the number of nearest neighbors as 10. For AMGL and MEA that need K-means as post-processing, we run K-means 100 times and record results with minimum objective value. For OP-LFMVC, we utilize the multi-view spectral embeddings as input pre-processed matrix. (See the sketch below.)
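
The Experiment Setup row above describes two concrete steps: building a 10-nearest-neighbor graph and running K-means 100 times while keeping the solution with the minimum objective value. The minimal sketch below illustrates those steps only; the original experiments were run in MATLAB R2020b, and the helper names `build_knn_graph` and `kmeans_best_of_runs` as well as the inputs `X` and `k` are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only (not the authors' MATLAB implementation), assuming a
# feature matrix X of shape (n_samples, n_features) and a known cluster count k.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans


def build_knn_graph(X, n_neighbors=10):
    """Build a symmetric KNN affinity graph with 10 nearest neighbors, as reported."""
    A = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity", include_self=False)
    return 0.5 * (A + A.T)  # symmetrize so the graph is undirected


def kmeans_best_of_runs(embedding, k, n_runs=100):
    """Run K-means n_runs times and keep the labels with the minimum objective value."""
    best_labels, best_obj = None, np.inf
    for run in range(n_runs):
        km = KMeans(n_clusters=k, n_init=1, random_state=run)
        labels = km.fit_predict(embedding)
        if km.inertia_ < best_obj:  # inertia_ is the within-cluster sum of squares
            best_obj, best_labels = km.inertia_, labels
    return best_labels
```

Since scikit-learn's KMeans already keeps the best of its restarts by inertia, `KMeans(n_clusters=k, n_init=100).fit_predict(embedding)` is an equivalent one-liner for the post-processing step.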