Cluster-Guided Contrastive Graph Clustering Network
Authors: Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Wenxuan Tu, Qun Zheng, Xinwang Liu, Liming Fang, En Zhu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms. |
| Researcher Affiliation | Academia | (1) College of Computer, National University of Defense Technology, Changsha, China; (2) College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; (3) University of Science and Technology of China; (4) Nanjing University of Aeronautics and Astronautics |
| Pseudocode | Yes | Algorithm 1: CCGC |
| Open Source Code | Yes | The code and appendix of CCGC are available at https://github.com/xihongyang1999/CCGC on Github. |
| Open Datasets | Yes | The experiments are conducted on six widely-used benchmark datasets, including CORA (Cui et al. 2020), CITESEER (Cui et al. 2020), BAT (Liu et al. 2022e; Mrabah et al. 2021), EAT (Liu et al. 2022e), UAT (Liu et al. 2022e), AMAP (Liu et al. 2022c). The summarized information is shown in Table 2. |
| Dataset Splits | No | The paper describes a two-stage training strategy and parameters like max epochs, but does not explicitly provide information on dataset splits for training, validation, or testing (e.g., percentages or counts of samples for each split). |
| Hardware Specification | Yes | The experimental environment contains one desktop computer with the Intel Core i7-7820X CPU, one NVIDIA GeForce RTX 2080Ti GPU, 64GB RAM, and the PyTorch deep learning platform. |
| Software Dependencies | No | The paper mentions the "PyTorch deep learning platform" but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The max training epoch number is set to 400. We minimize the total loss in Eq. (11) with the widely-used Adam optimizer (Kingma and Ba 2014) and then perform K-means over the learned embeddings. To obtain reliable clustering, we adopt a two-stage training strategy. ... The hyper-parameter settings are summarized in Table 1 of the Appendix. (A minimal sketch of this setup follows the table.) |
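For readers attempting to reproduce the reported setup, the following is a minimal PyTorch sketch of the training procedure the paper describes: Adam optimization for 400 epochs followed by K-means over the learned embeddings. `CCGCEncoder`, `total_loss`, the learning rate, and the dummy input are hypothetical stand-ins; the paper's actual architecture, its total loss (Eq. 11), the two-stage strategy, and the hyper-parameters in Table 1 of its Appendix differ from this skeleton.

```python
import torch
from sklearn.cluster import KMeans

class CCGCEncoder(torch.nn.Module):
    """Placeholder encoder; the paper's actual network differs."""
    def __init__(self, in_dim: int, hid_dim: int = 500):
        super().__init__()
        self.net = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def total_loss(z: torch.Tensor) -> torch.Tensor:
    # Stand-in for the paper's total loss (Eq. 11); not the real objective.
    return z.pow(2).mean()

x = torch.randn(256, 128)                 # dummy node features
model = CCGCEncoder(in_dim=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption

for epoch in range(400):                  # max epoch count from the paper
    opt.zero_grad()
    loss = total_loss(model(x))
    loss.backward()
    opt.step()

# After training, perform K-means over the learned embeddings.
with torch.no_grad():
    z = model(x).cpu().numpy()
labels = KMeans(n_clusters=7, n_init=10).fit_predict(z)  # 7 = CORA's class count
```

The only details fixed by the paper here are the epoch budget, the choice of Adam, and the final K-means step; everything else would need to be replaced with the components from the released code at https://github.com/xihongyang1999/CCGC.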