Multi-Class Imbalanced Graph Convolutional Network Learning

Authors: Min Shi, Yufei Tang, Xingquan Zhu, David Wilson, Jianxun Liu

Venue: IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on real-world imbalanced graphs demonstrate that DR-GCN outperforms the state-of-the-art methods in node classification, graph clustering, and visualization." |
| Researcher Affiliation | Academia | ¹Department of Computer & Electrical Engineering and Computer Science, Florida Atlantic University, USA; ²School of Computer Science and Engineering, Hunan University of Science and Technology, China |
| Pseudocode | Yes | "Algorithm 1: Training the DR-GCN model" |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | "We use four widely-used benchmark graph datasets [Wu et al., 2020], including Cora, Citeseer, Pubmed, and DBLP." |
| Dataset Splits | Yes | "The remaining nodes are split into validation and testing sets, where 10% are used for hyperparameter optimization and 90% are used for testing." (See the data-split sketch after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments. |
| Software Dependencies | No | The paper does not name the ancillary software needed to replicate the experiment (e.g., library or solver names with version numbers such as Python 3.8 or CPLEX 12.4). |
| Experiment Setup | Yes | "For GCN-based methods, we set the hidden embedding size r as 10, the dropout rate as 0.3, the L2 norm regularization weight decay as 0.03 and the learning rate for the gradient descent algorithm as 0.002. We set the maximum training epoch I as 1000 with an early stopping of 200. In our approach, the default values for M, N and α are set as 1, \|Vl\|/2 and 0.7, where \|Vl\| is the total number of labeled nodes." (See the training-setup sketch after this table.) |