Multi-level Graph Contrastive Prototypical Clustering

Authors: Yuchao Zhang, Yuan Yuan, Qi Wang

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on four benchmarks demonstrate the superiority of the proposed MLG-CPC against the state-of-the-art graph clustering approaches." and, from Section 4 (Experiments): "In the following sections, we first introduce experimental setups. Second, we conduct extensive experiments of graph clustering and verify the effectiveness of different sub-modules."
Researcher Affiliation | Academia | (1) School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China; (2) School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China
Pseudocode | Yes | Algorithm 1, "The training procedure of MLG-CPC"
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a repository for the MLG-CPC method. Footnote 1 points to a third-party library (DGL), not the authors' implementation.
Open Datasets | Yes | "We adopt four commonly used graph datasets [Shchur et al., 2018] in our experiments including CITE, ACM, DBLP, and AMAP."
Dataset Splits | No | The paper does not explicitly describe train/validation/test splits (percentages, sample counts, or references to predefined splits). Although 'evaluation testing' is mentioned, the data partitioning method is not described.
Hardware Specification | Yes | "We implement our MLG-CPC on PyTorch platform and Deep Graph Library (DGL) with the NVIDIA GeForce RTX 3090."
Software Dependencies | No | The paper mentions the PyTorch platform and the Deep Graph Library (DGL) but does not give version numbers for these components, which are needed for a reproducible setup.
Experiment Setup | Yes | "For all datasets, we set the learning rate at 1e-3, τ at 0.2, and λ at 5e-4. We use grid-search to find the optimal graph augmentation parameters, i.e., edge dropping and feature masking rates ranging from 0 to 1. The training procedure of MLG-CPC is optimized until reaching max epochs. We set max epochs at 40, 200, 400, 1000 for CITE, AMAP, ACM, and DBLP, respectively."
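
The Experiment Setup row above is concrete enough to restate as code. Below is a minimal Python/PyTorch sketch collecting the reported hyperparameters and the augmentation grid search. The helpers drop_edges, mask_features, and train_and_score are illustrative assumptions (the paper's actual training loop, Algorithm 1, is not reproduced here), the 0.1 grid step is a guess since the paper only gives the [0, 1] range, and treating λ as a generic coefficient is an assumption because the excerpt does not state its role.

```python
import itertools
import torch

# Hyperparameters as reported in the paper (shared across all datasets).
LR = 1e-3      # learning rate
TAU = 0.2      # temperature tau for the contrastive objectives
LAMBDA = 5e-4  # lambda; its exact role (loss weight vs. weight decay)
               # is not stated in the excerpt -- treated generically here
MAX_EPOCHS = {"CITE": 40, "AMAP": 200, "ACM": 400, "DBLP": 1000}

def drop_edges(edge_index: torch.Tensor, p: float) -> torch.Tensor:
    """Randomly drop a fraction p of edges from a 2 x E COO edge list."""
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

def mask_features(x: torch.Tensor, p: float) -> torch.Tensor:
    """Zero out a random fraction p of feature dimensions for all nodes."""
    mask = (torch.rand(x.size(1)) >= p).float()
    return x * mask

def grid_search(train_and_score, n_steps: int = 10):
    """Grid-search edge-dropping and feature-masking rates over [0, 1].

    `train_and_score` is a hypothetical callback that trains the model
    with the given augmentation rates and returns a clustering metric
    (higher is better). The 0.1 step is an assumption, not from the paper.
    """
    rates = [i / n_steps for i in range(n_steps + 1)]  # 0.0, 0.1, ..., 1.0
    return max(
        itertools.product(rates, rates),
        key=lambda pq: train_and_score(edge_drop=pq[0], feat_mask=pq[1]),
    )
```

Note that because the paper pins no library versions (see the Software Dependencies row), any reproduction attempt would also need to record its own PyTorch and DGL versions alongside these settings.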