Bayesian Graph Convolutional Neural Networks for Semi-Supervised Classification

Authors: Yingxue Zhang, Soumyasundar Pal, Mark Coates, Deniz Ustebay

AAAI 2019, pp. 5829-5836 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We explore the performance of the proposed Bayesian GCNN on three well-known citation datasets (Sen, Namata, and others 2008): Cora, CiteSeer, and Pubmed. Tables 2, 3, and 4 summarize the results on the Cora, Citeseer, and Pubmed datasets, respectively.
Researcher Affiliation | Collaboration | Yingxue Zhang, Huawei Noah's Ark Lab, Montreal Research Centre, 7101 Avenue du Parc, H3N 1X9, Montreal, QC, Canada; Soumyasundar Pal and Mark Coates, Dept. of Electrical and Computer Engineering, McGill University, 3480 University St, H3A 0E9, Montreal, QC, Canada; Deniz Ustebay, Huawei Noah's Ark Lab, Montreal Research Centre, 7101 Avenue du Parc, H3N 1X9, Montreal, QC, Canada
Pseudocode | Yes | Algorithm 1: Bayesian-GCNN (a hedged sketch of the Monte Carlo averaging it describes appears after this table).
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for its methodology is publicly available.
Open Datasets | Yes | We explore the performance of the proposed Bayesian GCNN on three well-known citation datasets (Sen, Namata, and others 2008): Cora, CiteSeer, and Pubmed. (A loading sketch appears after this table.)
Dataset Splits | No | The data is split into train and test datasets in two different ways. The first is the fixed data split originating from (Yang, Cohen, and Salakhutdinov 2016)... The second type of split is random, where the training and test sets are created at random for each run. Note that the implementation of the GAT method as provided by the authors employs a validation set of 500 examples which is used to monitor validation accuracy... We report results without this validation set monitoring... (A sketch of one possible random-split procedure appears after this table.)
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper mentions hyperparameters for the GCNN but does not list specific software dependencies or library versions (e.g., TensorFlow, PyTorch, or scikit-learn with version numbers) required for reproduction.
Experiment Setup | Yes | The hyperparameters of the GCNN are the same for all of the experiments and are based on (Kipf and Welling 2017). The GCNN has two layers, where the number of hidden units is 16, the learning rate is 0.01, the L2 regularization parameter is 0.0005, and the dropout rate is 50% at each layer. These hyperparameters are also used in the Bayesian GCNN. In addition, the hyperparameters associated with MMSBM inference are set as follows: η = 1, α = 1, ρ = 0.001, n = 500, ϵ₀ = 1, τ = 1024, and κ = 0.5. (A configuration sketch appears after this table.)
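The pseudocode row refers to the paper's Algorithm 1 (Bayesian-GCNN), which samples graphs from an MMSBM posterior over the adjacency and samples weights via dropout, then averages the resulting softmax predictions. The following is a minimal PyTorch sketch of that Monte Carlo averaging only; `gcnn` and `sample_graph` are hypothetical placeholders, not the authors' implementation, and the sample counts are illustrative defaults rather than the paper's settings.

```python
import torch

@torch.no_grad()
def bayesian_gcnn_predict(gcnn, features, sample_graph, n_graphs=5, n_weights=20):
    """Average GCNN predictions over sampled graphs and dropout weight samples."""
    gcnn.train()  # keep dropout active so each forward pass is a weight sample
    probs = 0.0
    for _ in range(n_graphs):
        adj = sample_graph()  # one adjacency drawn from the graph posterior
        for _ in range(n_weights):
            probs = probs + torch.softmax(gcnn(features, adj), dim=-1)
    return probs / (n_graphs * n_weights)
```

The prediction costs n_graphs × n_weights forward passes, trading computation for uncertainty-aware averaging.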
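The three citation benchmarks named in the Open Datasets row are widely mirrored. A minimal loading sketch, assuming PyTorch Geometric is installed (the paper does not specify any particular library); Planetoid's default "public" split corresponds to the fixed split of Yang, Cohen, and Salakhutdinov (2016) mentioned in the Dataset Splits row.

```python
from torch_geometric.datasets import Planetoid

for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root="data/" + name, name=name)
    data = dataset[0]  # each benchmark is a single graph
    print(name, data.num_nodes, data.num_edges, dataset.num_classes)
```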
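The Dataset Splits row says the random splits are created anew for each run but does not pin down the sampling procedure. The sketch below is one plausible reading, assuming class-balanced sampling of labelled training nodes; the per_class value and the function itself are hypothetical, not taken from the paper.

```python
import torch

def random_split(labels, num_classes, per_class=20, seed=0):
    """Sample `per_class` training nodes per class; all remaining nodes are test."""
    g = torch.Generator().manual_seed(seed)
    train_mask = torch.zeros(labels.numel(), dtype=torch.bool)
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        perm = idx[torch.randperm(idx.numel(), generator=g)]
        train_mask[perm[:per_class]] = True
    return train_mask, ~train_mask  # (train, test)
```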
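The Experiment Setup row quotes concrete GCNN hyperparameters. Here is a minimal sketch wiring them into a two-layer model, assuming PyTorch Geometric's GCNConv; this is an illustration under those assumptions, not the authors' code. Applying the L2 penalty to all parameters is a simplification: Kipf and Welling (2017) regularize only the first layer.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNN(torch.nn.Module):
    """Two-layer GCN: 16 hidden units, 50% dropout at each layer."""
    def __init__(self, in_dim, num_classes, hidden=16, dropout=0.5):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)
        self.dropout = dropout

    def forward(self, x, edge_index):
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.conv2(x, edge_index)

model = GCNN(in_dim=1433, num_classes=7)  # Cora dimensions, for illustration
# Learning rate 0.01 and L2 regularization 0.0005, as quoted above.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
```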