Why Do Attributes Propagate in Graph Convolutional Neural Networks?

Authors: Liang Yang, Chuan Wang, Junhua Gu, Xiaochun Cao, Bingxin Niu

AAAI 2021, pp. 4590-4598 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the superior performance of GCC." "Evaluations: In this section, the performance of our proposed GCC and GCA is experimentally evaluated on transductive and inductive semi-supervised node classification tasks." "Table 2: Comparison on transductive node classification in terms of AC (%)." "Table 3: Results on PPI."
Researcher Affiliation | Academia | 1. School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; 2. State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing, China; 3. Hebei Province Key Laboratory of Big Data Calculation, Hebei University of Technology, Tianjin, China
Pseudocode | No | The paper describes the proposed method with equations and textual descriptions but does not include a formally structured pseudocode or algorithm block.
Open Source Code | No | The paper provides no statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | "Cora, Citeseer, and Pubmed are citation network benchmark datasets (Sen et al. 2008)." "Texas, Cornell and Wisconsin are webpage networks from WebKB. Chameleon is a page-page network on specific topics in Wikipedia." "For the inductive learning task, 24 Protein-Protein Interaction (PPI) networks are employed (Hamilton, Ying, and Leskovec 2017)." A hedged loading sketch for these benchmarks follows the table.
Dataset Splits | Yes | "For the transductive learning task, the fixed split for training, validation and testing introduced in (Yang, Cohen, and Salakhutdinov 2016), i.e., 20 nodes per class for training, 500 nodes for validation and 1,000 nodes for testing, is adopted for the three citation networks Cora, Citeseer, and Pubmed. For each webpage network, i.e., Chameleon, Texas, Cornell and Wisconsin, nodes in each class are randomly split into 60%, 20%, and 20% for training, validation and testing." A sketch of this per-class split follows the table.
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as CPU or GPU models or memory capacity.
Software Dependencies | No | The paper mentions the "Adam SGD optimizer (Kingma and Ba 2015)" but does not specify version numbers for programming languages, libraries, or other software dependencies.
Experiment Setup | Yes | "Parameter Setting: Adam SGD optimizer (Kingma and Ba 2015) is adopted with learning rate 0.001. Besides, early stopping with a patience of 100 epochs and ℓ2 regularization (0.0006) are employed to prevent overfitting. γt = 0.1 and κt = 0.2 for transductive learning, while γt = 0.45 and κt = 0.32 in inductive learning. Similar to GCNII (Chen et al. 2020), identity mapping is employed to enhance the learnable mapping W. The number of layers (depth) is selected from 8, 16 and 32." A hedged training-setup sketch follows the table.
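
The paper does not describe its data-loading pipeline, so the following is a minimal sketch assuming PyTorch Geometric, whose built-in dataset classes ship every benchmark quoted in the Open Datasets row; the root directories are arbitrary placeholders, not the authors' paths.

```python
# Minimal loading sketch (assumes PyTorch Geometric; not from the paper).
from torch_geometric.datasets import Planetoid, WebKB, WikipediaNetwork, PPI

# Citation networks (Sen et al. 2008); the Planetoid variants carry the
# fixed split of (Yang, Cohen, and Salakhutdinov 2016) as boolean masks.
cora = Planetoid(root="data/planetoid", name="Cora")
citeseer = Planetoid(root="data/planetoid", name="CiteSeer")
pubmed = Planetoid(root="data/planetoid", name="PubMed")

# Webpage networks from WebKB and the Wikipedia page-page network.
texas = WebKB(root="data/webkb", name="Texas")
cornell = WebKB(root="data/webkb", name="Cornell")
wisconsin = WebKB(root="data/webkb", name="Wisconsin")
chameleon = WikipediaNetwork(root="data/wiki", name="chameleon")

# The 24 Protein-Protein Interaction graphs for inductive learning
# (Hamilton, Ying, and Leskovec 2017), pre-partitioned by `split`.
ppi_train = PPI(root="data/ppi", split="train")
ppi_val = PPI(root="data/ppi", split="val")
ppi_test = PPI(root="data/ppi", split="test")
```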
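To make the 60%/20%/20% protocol for the webpage networks concrete, here is a sketch of a per-class random split; the function name, seed handling, and rounding are our choices and are not specified by the paper.

```python
import numpy as np

def per_class_split(labels, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly split node indices 60/20/20 within each class, as
    described for Chameleon, Texas, Cornell and Wisconsin.
    `labels` is a 1-D array of integer class labels, one per node."""
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # nodes belonging to class c
        rng.shuffle(idx)
        n_train = int(round(train_frac * len(idx)))
        n_val = int(round(val_frac * len(idx)))
        train_idx.extend(idx[:n_train])
        val_idx.extend(idx[n_train:n_train + n_val])
        test_idx.extend(idx[n_train + n_val:])
    return np.array(train_idx), np.array(val_idx), np.array(test_idx)
```

Splitting within each class rather than over all nodes keeps the class proportions of the training, validation and test sets roughly equal, which is the usual reading of "nodes in each class are randomly split".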
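Finally, a sketch of the reported optimization setup, assuming PyTorch: Adam with learning rate 0.001, the ℓ2 regularization (0.0006) mapped onto Adam's weight_decay (our assumption), and early stopping with a patience of 100 epochs on validation accuracy. The model and data below are toy stand-ins; the GCC architecture, identity mapping, and the γt/κt coefficients are not reproduced here.

```python
import torch

# Toy stand-ins for the GCC model and data; the paper's architecture,
# identity-mapping trick, and propagation rule are NOT reproduced.
model = torch.nn.Linear(16, 7)
x, y = torch.randn(200, 16), torch.randint(0, 7, (200,))
val_x, val_y = torch.randn(100, 16), torch.randint(0, 7, (100,))

# Adam with learning rate 0.001; l2 regularization (0.0006) expressed
# as weight_decay, which is an assumption about the paper's setup.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0006)
loss_fn = torch.nn.CrossEntropyLoss()

# Early stopping with a patience of 100 epochs on validation accuracy.
best_val, patience, wait = 0.0, 100, 0
for epoch in range(10_000):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    with torch.no_grad():
        val_acc = (model(val_x).argmax(dim=1) == val_y).float().mean().item()
    if val_acc > best_val:
        best_val, wait = val_acc, 0
    else:
        wait += 1
        if wait >= patience:
            break
```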