Graph Filter-based Multi-view Attributed Graph Clustering
Authors: Zhiping Lin, Zhao Kang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments indicate that our method works surprisingly well with respect to state-of-the-art deep neural network methods. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China 201921080534@std.uestc.edu.cn, Zkang@uestc.edu.cn |
| Pseudocode | Yes | Algorithm 1 MvAGC |
| Open Source Code | Yes | The source code is available at https://github.com/sckangz/MvAGC. |
| Open Datasets | Yes | To demonstrate the effectiveness of our method, we select five benchmark datasets to evaluate the performance. Among them, ACM, DBLP, and IMDB [Fan et al., 2020] consist of one feature matrix and multiple graphs. Amazon Photo and Amazon Computer [Shchur et al., 2018] consist of one feature matrix and one graph. |
| Dataset Splits | No | The paper refers to using datasets and evaluating performance, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or specific methodologies for creating these splits). |
| Hardware Specification | Yes | All methods are conducted on the same machine with an Intel(R) Core(TM) i7-6800k 3.40GHz CPU, a GeForce GTX 1080 Ti GPU, and 32GB RAM. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages or library names with their version numbers. |
| Experiment Setup | Yes | For our MvAGC, we set f(A) = A + A² and tune the parameters to obtain the best results. We adopt four widely used metrics: Accuracy (ACC), Normalized Mutual Information (NMI), F1-score (F1), and Adjusted Rand Index (ARI). We found that w has little influence on the results, so we set w = 3 for all experiments. We can observe that k = 3 is good enough to ensure promising results. |
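The experiment-setup row quotes the paper's choice of f(A) = A + A², i.e. augmenting the adjacency matrix with its square so that two-hop neighbors also become connected. A minimal NumPy sketch of that operation is shown below; the toy graph and the helper name `second_order_adjacency` are illustrative assumptions, not part of the authors' released code.

```python
import numpy as np

def second_order_adjacency(A):
    """Illustrative sketch: compute f(A) = A + A^2, so each node's
    neighborhood additionally covers its two-hop neighbors.
    (Toy example; not the authors' exact pipeline.)"""
    return A + A @ A

# Toy 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

fA = second_order_adjacency(A)
# After augmentation, two-hop pairs such as (0, 2) carry nonzero weight,
# and the diagonal picks up each node's degree from A^2.
print(fA)
```

Because A² counts walks of length two, the diagonal of f(A) equals each node's degree, and the result stays symmetric for an undirected graph.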