Deep Contrastive Graph Learning with Clustering-Oriented Guidance

Authors: Mulin Chen, Bocheng Wang, Xuelong Li

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results on several benchmark datasets demonstrate the superiority of DCGL against state-of-the-art algorithms. |
| Researcher Affiliation | Academia | Mulin Chen (1,2), Bocheng Wang (1,2), Xuelong Li (1,2)*. (1) School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China. (2) Key Laboratory of Intelligent Interaction and Applications, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China. |
| Pseudocode | Yes | Algorithm 1: DCGL |
| Open Source Code | No | The paper does not provide a statement or link indicating the availability of its own source code. |
| Open Datasets | Yes | Seven publicly available datasets are collected as benchmarks: the regular record types TOX-171 and Isolet, the image types ORL, Yale B, PIE, and USPS, and the text type TR41. |
| Dataset Splits | No | The paper lists the total number of samples for each dataset but does not explicitly provide train/validation/test splits or their percentages. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions optimizers and clustering algorithms but does not provide specific software names with version numbers. |
| Experiment Setup | Yes | For DCGL, the hyper-parameters α, β, and γ are fixed to 1, 10^3, and 2 × 10^3, respectively. The neighbor number for LPG rises every 6 epochs, and the maximum epoch number is 30. To ensure objectivity, the random seed is fixed before code execution, and each algorithm is repeated 10 times. |
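The reported setup can be collected into a small configuration sketch. This is a hypothetical reconstruction, not the authors' code (which is not released): the function and key names are assumptions, the base neighbor number and seed value are not given in the paper, and only the values α = 1, β = 10^3, γ = 2 × 10^3, the 6-epoch neighbor schedule, the 30-epoch budget, and the 10 repetitions come from the report above.

```python
# Hypothetical sketch of the DCGL experiment settings described in the paper.
# All identifiers are assumptions; only the numeric values are from the report.

def lpg_neighbor_count(epoch: int, base_k: int, step: int = 6) -> int:
    """Neighbor number for LPG, rising by one every `step` (6) epochs.

    The paper states only that the neighbor number "rises every 6 epochs";
    the +1 increment and the base value are assumptions for illustration.
    """
    return base_k + epoch // step

CONFIG = {
    "alpha": 1.0,       # reported: alpha fixed to 1
    "beta": 1e3,        # reported: beta fixed to 10^3
    "gamma": 2e3,       # reported: gamma fixed to 2 * 10^3
    "max_epochs": 30,   # reported maximum epoch number
    "num_repeats": 10,  # each algorithm repeated 10 times
    "seed": 0,          # a seed is fixed before execution; exact value unknown
}
```

With a base neighbor number of 5, the schedule would use 5 neighbors for epochs 0-5, 6 for epochs 6-11, and so on up to 9 at epoch 29, the final epoch under the 30-epoch budget.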