Going Deep: Graph Convolutional Ladder-Shape Networks

Authors: Ruiqi Hu, Shirui Pan, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have validated the effectiveness of the proposed GCLN at a node-wise level with a semi-supervised task (node classification) and an unsupervised task (node clustering), and at a graph-wise level with graph classification by applying a differentiable pooling operation. The proposed GCLN outperforms the original GCN, deep GCNs and other state-of-the-art GCN-based models on all three tasks across six real-world benchmark data sets.
Researcher Affiliation | Academia | (1) Centre for Artificial Intelligence, University of Technology Sydney, Australia; (2) Faculty of IT, Monash University, Australia; (3) Data61, CSIRO
Pseudocode | Yes | Algorithm 1: Graph Convolutional Ladder-Shape Networks for Node Classification
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for its methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We conducted the experiments using three real-world bibliographic data sets: Cora, Citeseer and Pubmed (Sen et al. 2008); the data set statistics are summarized in Table 1.
Dataset Splits | Yes | Table 1 (Datasets for Node Classification and Clustering):

Dataset  | # Nodes | # Edges | # Features | # Classes | # Training Nodes | # Validation Nodes | # Test Nodes | Label Rate
Citeseer | 3,327   | 4,732   | 3,703      | 6         | 120              | 500                | 1,000        | 0.036
Cora     | 2,708   | 5,429   | 1,433      | 7         | 140              | 500                | 1,000        | 0.052
Pubmed   | 19,717  | 443,388 | 500        | 3         | 60               | 500                | 1,000        | 0.003

Specifically, eight folds were used to train the model, one fold was used as the validation set for hyper-parameter adjustment, and the remaining fold was used for testing.
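(As a quick arithmetic check, the Label Rate column is consistent with # training nodes / # nodes: 140 / 2,708 ≈ 0.052 for Cora, 120 / 3,327 ≈ 0.036 for Citeseer, and 60 / 19,717 ≈ 0.003 for Pubmed.)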
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver versions) needed to replicate the experiment.
Experiment Setup | Yes | We used an eight-GCN-layer GCLN (excluding the input layer and the softmax layer) to conduct all the experiments. The first layer consists of 64 neurons and each subsequent layer in the contracting path halves the number of neurons of the previous layer. Because of the symmetric architecture of GCLN, the first layer in the expanding path starts with 8 neurons and each subsequent layer doubles the number of neurons, so that the last layer has 64 neurons. Dropout of 0.9 and the ReLU activation function were applied after every graph convolutional operation. The learning rate was kept at 0.01 for all experiments.
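To make the quoted width schedule concrete, the following is a minimal PyTorch sketch of such an architecture, not the authors' implementation. It assumes a standard Kipf-and-Welling graph convolution, a caller-supplied normalized adjacency matrix, and an additive fusion between width-matched contracting and expanding layers; GCNLayer, GCLNSketch, the fusion operator, and the reading of 0.9 as the drop probability are illustrative assumptions, not details quoted from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    # Minimal Kipf-Welling graph convolution: H' = A_hat @ H @ W, where
    # a_hat is the normalized adjacency D^-1/2 (A + I) D^-1/2, kept dense
    # here for brevity.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.linear(h)


class GCLNSketch(nn.Module):
    # Width schedule from the quoted setup: contracting path 64-32-16-8,
    # expanding path 8-16-32-64 (eight GCN layers in total), followed by a
    # softmax output layer. The additive ladder fusion between width-matched
    # layers is an assumption of this sketch.
    def __init__(self, in_dim, num_classes, dropout=0.9):
        super().__init__()
        down_widths = [64, 32, 16, 8]
        up_widths = [8, 16, 32, 64]
        dims = [in_dim] + down_widths
        self.down = nn.ModuleList(
            GCNLayer(i, o) for i, o in zip(dims[:-1], dims[1:]))
        dims = [down_widths[-1]] + up_widths
        self.up = nn.ModuleList(
            GCNLayer(i, o) for i, o in zip(dims[:-1], dims[1:]))
        self.out = GCNLayer(up_widths[-1], num_classes)
        self.dropout = dropout

    def forward(self, a_hat, x):
        skips, h = [], x
        for layer in self.down:
            h = F.relu(layer(a_hat, h))
            h = F.dropout(h, p=self.dropout, training=self.training)
            skips.append(h)
        # Fuse each expanding layer with the contracting layer of equal width.
        for layer, skip in zip(self.up, reversed(skips)):
            h = F.relu(layer(a_hat, h))
            h = F.dropout(h, p=self.dropout, training=self.training)
            h = h + skip  # assumed fusion; the paper's operator may differ
        return F.log_softmax(self.out(a_hat, h), dim=-1)


if __name__ == "__main__":
    # Smoke test with Cora-like dimensions; the identity matrix stands in
    # for a real normalized adjacency.
    n, d, c = 100, 1433, 7
    model = GCLNSketch(d, c)
    print(model(torch.eye(n), torch.randn(n, d)).shape)  # torch.Size([100, 7])

Training under the quoted settings would then pair this with torch.optim.Adam(model.parameters(), lr=0.01) and a negative log-likelihood loss on the labeled nodes.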