Hard Sample Aware Network for Contrastive Deep Graph Clustering
Authors: Yue Liu, Xihong Yang, Sihang Zhou, Xinwang Liu, Zhen Wang, Ke Liang, Wenxuan Tu, Liang Li, Jingcan Duan, Cancan Chen
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and analyses demonstrate the superiority and effectiveness of our proposed method. |
| Researcher Affiliation | Academia | (1) College of Computer, National University of Defense Technology; (2) College of Intelligence Science and Technology, National University of Defense Technology; (3) Northwestern Polytechnical University; (4) Beijing Information Science and Technology University |
| Pseudocode | Yes | Algorithm 1: Hard Sample Aware Network (a hedged training-step sketch follows the table) |
| Open Source Code | Yes | The source code of HSAN is shared at https://github.com/yueliu1999/HSAN |
| Open Datasets | Yes | To evaluate the effectiveness of our proposed HSAN, we conduct experiments on six benchmark datasets, including CORA, CITE, Amazon Photo (AMAP), Brazil Air-Traffic (BAT), Europe Air-Traffic (EAT), and USA Air-Traffic (UAT). |
| Dataset Splits | No | The paper does not describe explicit training, validation, and test splits, such as percentages or sample counts for the input data. |
| Hardware Specification | Yes | All experimental results are obtained from a desktop computer with an Intel Core i7-7820X CPU, one NVIDIA GeForce RTX 2080Ti GPU, 64GB RAM, and the PyTorch deep learning platform. |
| Software Dependencies | No | The paper mentions the 'PyTorch deep learning platform' but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The training epoch number is set to 400...both the attribute encoders and structure encoders are two parameter-unshared one-layer MLPs with 500 hidden units for UAT/AMAP and 1500 hidden units for the other datasets. The learnable trade-off α is initialized to 0.99999 and reduces to around 0.4 in our experiments. (A minimal setup sketch follows the table.) |
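
For concreteness, here is a minimal PyTorch sketch of the setup reported above: two parameter-unshared one-layer MLP encoders per view (attribute and structure) and a learnable trade-off α initialized to 0.99999. The feature sizes, learning rate, and toy dimensions are illustrative assumptions, not values from the paper; the authoritative implementation is the repository linked above.

```python
# Minimal sketch of the reported setup; sizes and the learning rate are
# illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerEncoder(nn.Module):
    """One-layer MLP encoder (500 hidden units for UAT/AMAP, 1500 otherwise)."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so that dot products between embeddings are cosines.
        return F.normalize(self.linear(x), dim=1)

n_nodes, in_dim, hidden_dim = 64, 1433, 1500   # toy sizes (assumed)
# Two parameter-unshared encoders per view, as the setup row describes.
attr_enc1 = OneLayerEncoder(in_dim, hidden_dim)
attr_enc2 = OneLayerEncoder(in_dim, hidden_dim)
struct_enc1 = OneLayerEncoder(n_nodes, hidden_dim)  # consumes adjacency rows
struct_enc2 = OneLayerEncoder(n_nodes, hidden_dim)
alpha = nn.Parameter(torch.tensor(0.99999))         # learnable trade-off, init 0.99999

encoders = [attr_enc1, attr_enc2, struct_enc1, struct_enc2]
optimizer = torch.optim.Adam(
    [p for enc in encoders for p in enc.parameters()] + [alpha],
    lr=1e-3,  # learning rate is an assumption, not reported in this table
)
```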
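And, in the spirit of Algorithm 1, a hedged sketch of one training step: the two cross-view similarities are fused with the learnable α, and a focal-style weight up-weights hard pairs (positives with low similarity, negatives with high similarity). The exact weighting function and pair construction in HSAN differ, and `tau` and `gamma` are assumed values, so treat this as a simplified stand-in rather than the authors' loss.

```python
def hard_aware_contrastive_loss(sim: torch.Tensor, tau: float = 0.5,
                                gamma: float = 2.0) -> torch.Tensor:
    """Weighted InfoNCE over a fused node-similarity matrix.

    Focal-style stand-in for HSAN's hard-sample weighting: hard positives
    (diagonal entries with low similarity) and hard negatives (off-diagonal
    entries with high similarity) receive larger weights. tau and gamma are
    illustrative, not values from the paper.
    """
    n = sim.size(0)
    pos_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    s = torch.sigmoid(sim).detach()                  # weights act as constants
    weight = torch.where(pos_mask, (1 - s) ** gamma, s ** gamma)
    exp_sim = weight * torch.exp(sim / tau)
    return -torch.log(exp_sim[pos_mask] / exp_sim.sum(dim=1)).mean()

x = torch.randn(n_nodes, in_dim)                     # toy attributes (assumed)
adj = (torch.rand(n_nodes, n_nodes) < 0.1).float()   # toy adjacency (assumed)

for epoch in range(400):                             # 400 epochs, as reported
    optimizer.zero_grad()
    za1, za2 = attr_enc1(x), attr_enc2(x)            # two attribute views
    zs1, zs2 = struct_enc1(adj), struct_enc2(adj)    # two structure views
    a = alpha.clamp(0.0, 1.0)                        # keep trade-off in [0, 1]
    # Comprehensive cross-view similarity, mixed by the learnable trade-off.
    sim = a * (za1 @ za2.T) + (1 - a) * (zs1 @ zs2.T)
    loss = hard_aware_contrastive_loss(sim)
    loss.backward()                                  # alpha is trained jointly
    optimizer.step()
```

Because α is optimized jointly with the encoders, its value can drift from the attribute-dominated initialization toward a more balanced mix, which is consistent with the reported decrease from 0.99999 to around 0.4. Cluster assignments would then be obtained downstream, e.g. by K-means on the fused embeddings.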