HC-GAE: The Hierarchical Cluster-based Graph Auto-Encoder for Graph Representation Learning

Authors: Lu Bai, Zhuo Xu, Lixin Cui, Ming Li, Yue Wang, Edwin Hancock

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed HC-GAE can generate effective representations for both node classification and graph classification, and the experiments demonstrate its effectiveness on real-world datasets.
Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China; (2) Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing Normal University, Beijing 100875, China; (3) School of Information, Central University of Finance and Economics, Beijing 100081, China; (4) Zhejiang Institute of Optoelectronics, Jinhua 321004, China; (5) Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua 321004, China; (6) Department of Computer Science, University of York, York YO10 5GH, United Kingdom
Pseudocode | No | The paper describes the methodology using mathematical formulations and figures, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | The source code is available on https://github.com/JonathanGXu/HC-GAE
Open Datasets | Yes | For node classification, we employ five real-world datasets, i.e., Cora, CiteSeer, PubMed, Amazon-Computers and Coauthor-CS. ... For graph classification, we adopt five standard graph datasets, i.e., IMDB-B, IMDB-M, PROTEINS, COLLAB and MUTAG. (A hedged loading sketch appears after this table.)
Dataset Splits | Yes | For both the proposed HC-GAE and the alternative baselines, we adopt 10-fold cross-validation to compute classification accuracies, and follow the original setups of the different baselines. (A minimal cross-validation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies, libraries, or programming languages used (e.g., Python version, PyTorch/TensorFlow version).
Experiment Setup | Yes | Specifically, for the HC-GAE, we select the Adam optimizer to train the parameters, and set the number of epochs, the hidden dimension and the dropout rate to 50, 128 and 0.5, respectively. For the encoder and decoder of the HC-GAE, we set their greatest layer numbers both as 3, and their node numbers for the 3 layers as {128, 64, 32} and {32, 64, 128}, respectively. Moreover, we set the batch size and the learning rate as 1024 and 1e-2 for node classification, and 64 and 5e-4 for graph classification. (These values are collected into a configuration sketch after the table.)
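
To make the Open Datasets row concrete, the sketch below loads the ten named datasets. The paper does not state its data-loading tooling, so PyTorch Geometric is an assumption here, as are the cache directory and variable names; note that the TU collection spells the two IMDB datasets IMDB-BINARY and IMDB-MULTI.

    # Hedged sketch: torch_geometric is an assumption, not stated in the paper.
    from torch_geometric.datasets import Planetoid, Amazon, Coauthor, TUDataset

    ROOT = "data"  # hypothetical local cache directory

    # Node-classification datasets named in the paper.
    node_sets = {
        "Cora": Planetoid(ROOT, name="Cora"),
        "CiteSeer": Planetoid(ROOT, name="CiteSeer"),
        "PubMed": Planetoid(ROOT, name="PubMed"),
        "Amazon-Computers": Amazon(ROOT, name="Computers"),
        "Coauthor-CS": Coauthor(ROOT, name="CS"),
    }

    # Graph-classification datasets named in the paper (TU collection naming).
    graph_sets = {
        name: TUDataset(ROOT, name=name)
        for name in ["IMDB-BINARY", "IMDB-MULTI", "PROTEINS", "COLLAB", "MUTAG"]
    }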
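
The 10-fold protocol in the Dataset Splits row is a standard evaluation recipe; here is a minimal sketch, assuming scikit-learn and a downstream classifier fitted on frozen HC-GAE embeddings. The `embeddings` and `labels` arrays and the logistic-regression probe are stand-ins not specified by the paper.

    # Hedged sketch of 10-fold cross-validation over learned embeddings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold

    def cross_val_accuracy(embeddings, labels, n_splits=10, seed=0):
        """Mean/std accuracy over stratified folds; the probe is hypothetical."""
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        accs = []
        for train_idx, test_idx in skf.split(embeddings, labels):
            probe = LogisticRegression(max_iter=1000)
            probe.fit(embeddings[train_idx], labels[train_idx])
            accs.append(probe.score(embeddings[test_idx], labels[test_idx]))
        return float(np.mean(accs)), float(np.std(accs))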
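
Finally, the hyperparameters quoted in the Experiment Setup row are gathered into one place below. Only the numeric values come from the paper; the dataclass, its field names, and the commented optimizer line are illustrative assumptions.

    # Hedged sketch: values from the paper, structure hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class HCGAEConfig:
        epochs: int = 50
        hidden_dim: int = 128
        dropout: float = 0.5
        # Per-layer node numbers for the 3-layer encoder and decoder.
        encoder_nodes: list = field(default_factory=lambda: [128, 64, 32])
        decoder_nodes: list = field(default_factory=lambda: [32, 64, 128])
        batch_size: int = 1024
        learning_rate: float = 1e-2

    node_cfg = HCGAEConfig()                                    # node classification
    graph_cfg = HCGAEConfig(batch_size=64, learning_rate=5e-4)  # graph classification

    # The paper trains with Adam; `model` would be the HC-GAE module.
    # optimizer = torch.optim.Adam(model.parameters(), lr=node_cfg.learning_rate)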