Attributed Graph Clustering with Dual Redundancy Reduction

Authors: Lei Gong, Sihang Zhou, Wenxuan Tu, Xinwang Liu

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments have demonstrated that AGC-DRR outperforms the state-of-the-art clustering methods on most of our benchmarks. The corresponding code is available at https://github.com/gongleii/AGC-DRR."
Researcher Affiliation | Academia | "Lei Gong, Sihang Zhou, Wenxuan Tu and Xinwang Liu, National University of Defense Technology, Changsha, China. glnudt@163.com, xinwangliu@nudt.edu.cn"
Pseudocode | Yes | Algorithm 1: The training procedure of AGC-DRR
    Input: graph data {A, X}; number of clusters K; maximum iterations T; hyper-parameter λ
    Output: clustering results
    1: for t = 1 : T do
    2:     Calculate W and A to obtain the structure-augmented graph by Eq. (9) and Eq. (10), respectively;
           // Fix N2 and optimize N1
    3:     Calculate C1 and C2 by Eq. (4);
    4:     Update N1 by minimizing the objective in Eq. (11);
           // Fix N1 and optimize N2
    5:     Calculate Z1 and Z2 by Eq. (1);
    6:     Calculate C1 and C2 by Eq. (4);
    7:     Update N2 by maximizing the objective in Eq. (12);
    8: end for
    9: Obtain clustering results over the average of C1 and C2;
    10: return clustering results
Open Source Code | Yes | "The corresponding code is available at https://github.com/gongleii/AGC-DRR."
Open Datasets | Yes | "We evaluate the proposed AGC-DRR on four public benchmark datasets including ACM (https://dl.acm.org/), DBLP (https://dblp.uni-trier.de), CITE (http://citeseerx.ist.psu.edu/index), and AMAP [Shchur et al., 2018]."
Dataset Splits | No | The paper does not explicitly state the training, validation, and test dataset splits (e.g., percentages or sample counts).
Hardware Specification | Yes | "We conduct experiments to evaluate the proposed AGC-DRR on the PyTorch platform with the NVIDIA GeForce RTX 3080."
Software Dependencies | No | "We conduct experiments to evaluate the proposed AGC-DRR on the PyTorch platform with the NVIDIA GeForce RTX 3080."
Experiment Setup | Yes | "We train AGC-DRR on all benchmark datasets for at least 100 iterations until convergence. ... For our proposed AGC-DRR, we optimize it with the Adam optimizer, the learning rates for N1 and N2 are set to 1e-3 and 1e-4 on CITE, and 1e-4, 5e-4 on others, respectively. The regularized hyper-parameter λ is set as 1 for all datasets."
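The extracted pseudocode (Algorithm 1) is an alternating min-max loop: the view-clustering network N1 is updated with N2 fixed, then the edge-weight network N2 with N1 fixed. A minimal runnable sketch of that control flow, where `augment`, `cluster_heads`, `update_n1`, and `update_n2` are toy stand-in callables (assumptions, not the authors' networks); only the loop structure follows the paper:

```python
def train_agc_drr(T, augment, cluster_heads, update_n1, update_n2):
    """Alternating min-max training loop mirroring Algorithm 1's control flow."""
    for _ in range(T):
        augment()                 # Eqs. (9)-(10): build the structure-augmented graph
        # -- fix N2, optimize N1 --
        c1, c2 = cluster_heads()  # Eq. (4): soft cluster-assignment matrices
        update_n1(c1, c2)         # minimize the objective in Eq. (11)
        # -- fix N1, optimize N2 --
        c1, c2 = cluster_heads()  # Eqs. (1) and (4), recomputed after the N1 step
        update_n2(c1, c2)         # maximize the objective in Eq. (12)
    # final labels: argmax over the average of the two soft assignments (step 9)
    c1, c2 = cluster_heads()
    avg = [[(x + y) / 2 for x, y in zip(r1, r2)] for r1, r2 in zip(c1, c2)]
    return [row.index(max(row)) for row in avg]
```

Passing small lambdas for the four callables is enough to exercise the loop; in the actual repository these would be the GCN-based networks trained with backpropagation.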
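The reported hyper-parameters can be collected into a small config helper for a reproduction attempt. A hypothetical sketch: the function name and dict layout are assumptions, while the values (Adam, per-dataset learning rates, λ = 1, ≥ 100 iterations) are the settings quoted above:

```python
# Per-dataset learning rates for the two networks, as reported in the paper:
# 1e-3 / 1e-4 on CITE, and 1e-4 / 5e-4 on the other benchmarks.
LEARNING_RATES = {
    "CITE": {"N1": 1e-3, "N2": 1e-4},
    "ACM":  {"N1": 1e-4, "N2": 5e-4},
    "DBLP": {"N1": 1e-4, "N2": 5e-4},
    "AMAP": {"N1": 1e-4, "N2": 5e-4},
}

def training_config(dataset, lam=1.0, min_iterations=100):
    """Return the reported AGC-DRR training settings for one benchmark."""
    lrs = LEARNING_RATES[dataset]
    return {
        "optimizer": "Adam",
        "lr_N1": lrs["N1"],
        "lr_N2": lrs["N2"],
        "lambda": lam,
        "min_iterations": min_iterations,
    }
```

In a PyTorch reproduction, `lr_N1` and `lr_N2` would feed two separate `torch.optim.Adam` instances, one per network.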