DVSAI: Diverse View-Shared Anchors Based Incomplete Multi-View Clustering
Authors: Shengju Yu, Siwei Wang, Pei Zhang, Miao Wang, Ziming Wang, Zhe Liu, Liming Fang, En Zhu, Xinwang Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, comprehensive experiments confirm the effectiveness and advantages of DVSAI. Experiments: Datasets, Baselines and Setup. Six datasets are utilized in the experiments. Caltech101-7 and Caltech101-20 are two small-size image datasets. CCV and SUNRGBD are medium-size video and 3D datasets. NUSWIDEOBJ and Youtube Face Sel are large-size web-page and video datasets, respectively. Their detailed descriptions are presented in Table 1. To demonstrate the superiority of DVSAI, we compare it with the following eleven strong baselines: BSV (Jordan and Weiss 2002), MIC (Shao, He, and Yu 2015), MKKM-IK (Liu et al. 2017), UEAF (Wen et al. 2019), IK-MKC (Liu et al. 2020), FLSD (Wen et al. 2021), IMVC-CBG (Wang et al. 2022b), LSIMVC (Liu et al. 2022a), BGIMVSC (Sun et al. 2023), PIMVC (Deng et al. 2023), HCLS (Wen et al. 2023). We tune the hyper-parameter β in [2^-4, 2^-3, ..., 2^3, 2^4] and γ in [10^2, 10^3, 10^4, 10^5]. We set the dimension and size of anchors l_t and m_t in space t to be the same, both as t·k, and the number of spaces T as 5. Three metrics, ACC, NMI and Purity, are employed to assess the clustering performance. Experimental Results: Tables 2 and 3 present the clustering results on the six datasets under the missing rates p = 0.1, 0.3, 0.5, 0.7, where a blank entry denotes memory-overflow failure. We observe that: (1) our DVSAI displays obvious advantages over these advanced IMVC competitors. Especially under missing rate 0.1, it exceeds all compared approaches in ACC, NMI and PUR. Moreover, on the datasets Caltech101-20 and CCV it achieves the best results; on the other datasets and missing rates it also generates comparable results. This provides strong evidence of the effectiveness of our DVSAI. |
| Researcher Affiliation | Academia | Shengju Yu^1, Siwei Wang^2*, Pei Zhang^1, Miao Wang^2*, Ziming Wang^3, Zhe Liu^4, Liming Fang^5, En Zhu^1, Xinwang Liu^1*. Affiliations: ^1 School of Computer, National University of Defense Technology, Changsha, 410073, China; ^2 Intelligent Game and Decision Lab, Beijing, 100071, China; ^3 China Academy of Aerospace Science and Innovation, Beijing, 100176, China; ^4 Zhejiang Lab, Hangzhou, 311500, China; ^5 Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China |
| Pseudocode | Yes | Algorithm 1: DVSAI. Input: partial views {X^v}_{v=1}^V, index matrices {S^v}_{v=1}^V, parameters β, γ. Initialize: {P^{v,t}}_{v=1,t=1}^{V,T}, {A^t}_{t=1}^T, {G^t}_{t=1}^T, α. 1: repeat; 2: solve every P^{v,t} by Eq. (8); 3: solve every A^t by Eq. (13); 4: solve every G^t by Eq. (15); 5: solve α by Eq. (21); 6: until convergent. 7: Integrate {G^t}_{t=1}^T by Eq. (24); 8: generate U by running SVD on L. Output: clustering indicators by running k-means on U. |
| Open Source Code | No | No explicit statement or link regarding the public availability of source code for the described methodology was found. |
| Open Datasets | Yes | Experiments Datasets, Baselines and Setup Six datasets are utilized in the experiments. Caltech101-7 and Caltech101-20 are two image datasets with small size. CCV and SUNRGBD are the video and the 3D datasets with medium size. NUSWIDEOBJ and Youtube Face Sel are large-size web-page and video datasets respectively. Their detailed descriptions are presented in Table 1. |
| Dataset Splits | No | The paper states 'We tune the hyper-parameter β in [2^-4, 2^-3, ..., 2^3, 2^4] and γ in [10^2, 10^3, 10^4, 10^5]', implying some form of hyper-parameter tuning, but it does not provide explicit training/validation/test dataset splits (e.g., percentages or specific counts for each split). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper. |
| Experiment Setup | Yes | We tune the hyper-parameter β in [2^-4, 2^-3, ..., 2^3, 2^4] and γ in [10^2, 10^3, 10^4, 10^5]. We set the dimension and size of anchors l_t and m_t in space t to be the same, both as t·k, and the number of spaces T as 5. |
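The final stage of Algorithm 1 (integrating the per-space anchor graphs {G^t} with the weights α, running SVD on the resulting L, and clustering the rows of U with k-means) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: Eq. (24) is not reproduced here, so a weighted concatenation of the anchor graphs is assumed as the integration; the sizes n, T, k and the uniform α are placeholder values, and m_t = t·k follows the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions (assumptions, not from the paper's experiments):
# n samples, T=5 spaces as in the paper, k clusters, anchor counts m_t = t*k.
n, T, k = 60, 5, 3
m = [k * (t + 1) for t in range(T)]

# Stand-ins for the learned n x m_t anchor graphs G^t and the space weights α.
G = [np.abs(rng.standard_normal((n, mt))) for mt in m]
alpha = np.full(T, 1.0 / T)

# Assumed integration (Eq. (24) is not given in this report): weighted
# concatenation of the per-space anchor graphs into one matrix L.
L = np.hstack([alpha[t] * G[t] for t in range(T)])

# "Generate U by running SVD on L": keep the top-k left singular vectors.
U_full, _, _ = np.linalg.svd(L, full_matrices=False)
U = U_full[:, :k]

# "Clustering indicators by running k-means on U": a few Lloyd iterations.
centers = U[rng.choice(n, k, replace=False)]
for _ in range(20):
    labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([U[labels == c].mean(0) if (labels == c).any() else centers[c]
                        for c in range(k)])
```

The spectral embedding U has one row per sample, so `labels` directly gives the cluster indicator for each of the n samples; a library k-means (e.g., scikit-learn's) would normally replace the hand-rolled loop.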