A Sparse-Motif Ensemble Graph Convolutional Network against Over-smoothing

Authors: Xuan Jiang, Zhiyong Yang, Peisong Wen, Li Su, Qingming Huang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conduct a series of theoretical and experimental analyses to demonstrate the superiority of the proposed method.
Researcher Affiliation | Academia | 1 School of Computer Science and Tech., University of Chinese Academy of Sciences, Beijing, China; 2 Key Lab of Intell. Info. Process., Institute of Computing Technology, CAS, Beijing, China; 3 Peng Cheng Laboratory, Shenzhen, China. {jiangxuan20, wenpeisong20}@mails.ucas.ac.cn, {yangzhiyong21, suli, qmhuang}@ucas.ac.cn
Pseudocode | No | The paper describes algorithms and formulations in text and mathematical equations but does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/BoloJX/SMEGCN
Open Datasets | Yes | Coauthorship graphs include Coauthor CS and Coauthor Physics [Shchur et al., 2018]. In each graph, the node set consists of authors, and the edges are determined by whether two authors have co-authored. The features are the keywords of the authors' publications. Co-purchase graphs include Amazon Computers and Amazon Photo [Shchur et al., 2018]. The node set consists of different goods, and two goods are connected by an edge if they are frequently bought together. Web network graphs [Pei et al., 2020] include Cornell, Texas, and Wisconsin.
Dataset Splits | Yes | For a fair comparison, we follow the official data split. Other hyperparameters are tuned according to the performances on the validation sets. The checkpoint with the best performance on the validation set is preserved to evaluate the corresponding performance on the test set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'mindspore, which is a new AI computing framework' and provides a URL, but it does not specify a version number for MindSpore or any other software dependencies.
Experiment Setup | Yes | Training strategy. Our model and the competitors are trained by minimizing the cross-entropy loss with the Adam optimizer. Concretely, the initial learning rate is set to 0.01, and the weight decay factors for the first two linear layers are respectively taken from {0.00001, 0.00005, 0.005} and {0, 0.0005}. Other hyperparameters are tuned according to the performances on the validation sets. For our model, we add dropout layers with the probability searched from [0, 0.9] with an interval of 0.1. Hyperparameters of competitors are also tuned similarly. The models are trained for at most 1000 epochs.
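
The benchmark graphs listed in the Open Datasets row (Coauthor CS/Physics, Amazon Computers/Photo, and the web networks Cornell, Texas, and Wisconsin) are all available through PyTorch Geometric's built-in loaders. The released code may use a different framework (the paper itself mentions MindSpore), so the following is only a minimal sketch of one way to obtain the same datasets, assuming PyTorch Geometric is installed; it is not taken from the authors' repository.

```python
from torch_geometric.datasets import Amazon, Coauthor, WebKB

# Co-authorship graphs: nodes are authors, edges indicate co-authorship,
# node features are keywords of the authors' publications.
coauthor_cs = Coauthor(root='data/Coauthor', name='CS')
coauthor_physics = Coauthor(root='data/Coauthor', name='Physics')

# Co-purchase graphs: nodes are goods, edges connect frequently co-bought items.
amazon_computers = Amazon(root='data/Amazon', name='Computers')
amazon_photo = Amazon(root='data/Amazon', name='Photo')

# Web network graphs [Pei et al., 2020]; in PyTorch Geometric these ship with
# public train/val/test splits as boolean masks (one column per split).
cornell = WebKB(root='data/WebKB', name='Cornell')
texas = WebKB(root='data/WebKB', name='Texas')
wisconsin = WebKB(root='data/WebKB', name='Wisconsin')

data = coauthor_cs[0]  # each dataset holds a single Data object
print(data.num_nodes, data.num_features, int(data.y.max()) + 1)
```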
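The Experiment Setup and Dataset Splits rows describe the training recipe only in prose: Adam with an initial learning rate of 0.01, separate weight decay factors for the first two linear layers, dropout searched over [0, 0.9], at most 1000 epochs, and test evaluation on the checkpoint with the best validation performance. The sketch below illustrates that recipe in PyTorch under stated assumptions; `SMEGCN`, its constructor arguments, the `lin1`/`lin2` attribute names, and the `model(x, edge_index)` forward signature are hypothetical placeholders, not the authors' actual implementation.

```python
import copy
import torch
import torch.nn.functional as F


def train(model, data, wd1, wd2, lr=0.01, max_epochs=1000):
    # Per-layer weight decay: the paper searches separate factors for the first
    # two linear layers; `lin1`/`lin2` are assumed attribute names. `data` is
    # assumed to carry 1-D boolean train/val masks (e.g., one column of the
    # WebKB split masks or a manually created split for Coauthor/Amazon).
    other_params = [p for name, p in model.named_parameters()
                    if not name.startswith(('lin1', 'lin2'))]
    optimizer = torch.optim.Adam([
        {'params': model.lin1.parameters(), 'weight_decay': wd1},
        {'params': model.lin2.parameters(), 'weight_decay': wd2},
        {'params': other_params, 'weight_decay': 0.0},
    ], lr=lr)

    best_val_acc, best_state = -1.0, None
    for _ in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)  # assumed forward signature
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            pred = model(data.x, data.edge_index).argmax(dim=-1)
            val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
        if val_acc > best_val_acc:
            # Keep the checkpoint with the best validation accuracy; this is
            # the checkpoint later evaluated on the test set.
            best_val_acc, best_state = val_acc, copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)
    return model, best_val_acc


# Hypothetical grid over the ranges quoted above: weight decay factors from
# {1e-5, 5e-5, 5e-3} and {0, 5e-4}, dropout from 0 to 0.9 in steps of 0.1.
# `SMEGCN(...)` is a placeholder for however the model is actually built.
# import itertools
# for wd1, wd2, p in itertools.product([1e-5, 5e-5, 5e-3], [0.0, 5e-4],
#                                      [i / 10 for i in range(10)]):
#     model = SMEGCN(data.num_features, 64, num_classes, dropout=p)
#     model, val_acc = train(model, data, wd1, wd2)
```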