SURER: Structure-Adaptive Unified Graph Neural Network for Multi-View Clustering
Authors: Jing Wang, Songhe Feng, Gengyu Lyu, Jiazheng Yuan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on diverse datasets demonstrate the superior effectiveness of our method compared to other state-of-the-art approaches. |
| Researcher Affiliation | Academia | Jing Wang^1, Songhe Feng^1*, Gengyu Lyu^2, Jiazheng Yuan^3. ^1 Key Laboratory of Big Data and Artificial Intelligence in Transportation (Ministry of Education), School of Computer and Information Technology, Beijing Jiaotong University; ^2 Engineering Research Center of Intelligence Perception and Autonomous Control (Ministry of Education), Beijing University of Technology; ^3 College of Science and Technology, Beijing Open University. {jing w, shfeng}@bjtu.edu.cn, lyugengyu@bjut.edu.cn, jzyuan@139.com |
| Pseudocode | Yes | Algorithm 1: The Algorithm of SURER |
| Open Source Code | No | The paper does not include a statement or link indicating that the source code for SURER is openly available. |
| Open Datasets | Yes | In our experiments, we employ eight widely-used multi-view datasets to evaluate the performance of SURER, whose detailed characteristics are illustrated in Table 1. MSRCv1 (Winn and Jojic 2005)... BBCSports (Greene and Cunningham 2006)... 100leaves (Wang, Yang, and Liu 2019)... Mfeat (Wang, Yang, and Liu 2019)... Scene15 (Li and Perona 2005)... VOC (Hwang and Grauman 2010)... Hdigit (Chen et al. 2022)... NoisyMNIST (Wang et al. 2015)... |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits, specific percentages, or a detailed splitting methodology needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or library names with version numbers needed to replicate the experiment. |
| Experiment Setup | Yes | Implementation Details. For all datasets, the view-specific graph encoder module consists of a three-layer graph convolutional encoder with a ReLU activation function, and the dimensions are set as {dv, 512, 2048, 256}. The view-specific feature decoders are formed by four fully-connected layers whose dimensions are respectively set as {256, 2048, 512, dv}, with ReLU as the activation function. For the hyperparameters (i.e., λ1 and λ2), a grid-search method is adopted to select the optimal values from {0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0}. To enhance the robustness of the refined attribute graphs, we first conduct an initial pre-training phase for the GSL module spanning some epochs. Subsequently, we optimize GSL and the HGNN jointly. |
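
Since no source code is released, the sketch below is a minimal PyTorch rendering of only the view-specific architecture quoted above: a three-layer graph convolutional encoder with dimensions {dv, 512, 2048, 256} and a fully-connected decoder with dimensions {256, 2048, 512, dv}, both using ReLU. The dense GCN layer, the symmetric adjacency normalization, placing ReLU after every encoder layer, reading the decoder dimension list as three linear maps, and the toy graph in the usage example are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the view-specific encoder/decoder from the Implementation
# Details; layer sizes follow the paper, everything else (dense GCN layers,
# symmetric normalization, toy data) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix: D^-1/2 (A + I) D^-1/2."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One dense graph convolution: A_norm @ (X W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        return adj_norm @ self.linear(x)


class ViewEncoder(nn.Module):
    """Three-layer graph convolutional encoder with dimensions {d_v, 512, 2048, 256}."""
    def __init__(self, d_v: int):
        super().__init__()
        self.layers = nn.ModuleList(
            [GCNLayer(d_v, 512), GCNLayer(512, 2048), GCNLayer(2048, 256)]
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        adj_norm = normalize_adj(adj)
        for layer in self.layers:
            x = F.relu(layer(x, adj_norm))  # ReLU placement is an assumption
        return x


class ViewDecoder(nn.Module):
    """Fully-connected decoder with dimensions {256, 2048, 512, d_v}."""
    def __init__(self, d_v: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 2048), nn.ReLU(),
            nn.Linear(2048, 512), nn.ReLU(),
            nn.Linear(512, d_v),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


if __name__ == "__main__":
    n, d_v = 100, 48                      # toy view: 100 samples, 48 features
    x = torch.randn(n, d_v)
    adj = (torch.rand(n, n) > 0.9).float()
    adj = ((adj + adj.t()) > 0).float()   # make the toy graph symmetric
    enc, dec = ViewEncoder(d_v), ViewDecoder(d_v)
    z = enc(x, adj)                       # (100, 256) latent embedding
    x_rec = dec(z)                        # (100, 48) reconstruction
    print(z.shape, x_rec.shape)
```

The trade-off hyperparameters λ1 and λ2 would then be selected by grid search over {0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0}, as stated in the Experiment Setup row.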