Bayesian Graph Neural Networks with Adaptive Connection Sampling

Authors: Arman Hasanzadeh, Ehsan Hajiramezanali, Shahin Boluki, Mingyuan Zhou, Nick Duffield, Krishna Narayanan, Xiaoning Qian

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results with ablation studies on benchmark datasets validate that adaptively learning the sampling rate given graph training data is the key to boosting the performance of GNNs in semi-supervised node classification, making them less prone to over-smoothing and over-fitting with more robust prediction. (see the connection-sampling sketch after the table)
Researcher Affiliation | Academia | Electrical and Computer Engineering Department, Texas A&M University, College Station, Texas, USA; McCombs School of Business, The University of Texas at Austin, Austin, Texas, USA.
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/armanihm/GDC
Open Datasets | Yes | We consider Cora, Citeseer and Cora-ML datasets, and preprocess and split them same as Kipf & Welling (2017) and Bojchevski & Günnemann (2018).
Dataset Splits | Yes | We train beta-Bernoulli GDC (BBGDC) models for 2000 epochs with a learning rate of 0.005 and a validation set used for early stopping. We consider Cora, Citeseer and Cora-ML datasets, and preprocess and split them same as Kipf & Welling (2017) and Bojchevski & Günnemann (2018). (see the data-loading sketch after the table)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU or GPU models, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper mentions implementing GCNs and using specific techniques but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | We train beta-Bernoulli GDC (BBGDC) models for 2000 epochs with a learning rate of 0.005 and a validation set used for early stopping. All of the hidden layers in our implemented GCNs have 128 dimensional output features. We use 5 × 10^-3, 10^-2, and 10^-3 as L2 regularization factor for Cora, Citeseer, and Cora-ML, respectively. For the GCNs with more than 2 layers, we use warm-up during the first 50 training epochs to gradually impose the beta-Bernoulli KL term in the objective function. The temperature in the concrete distribution is set to 0.67. (see the training-loop sketch after the table)
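
The Research Type row credits the gains to adaptively learning the connection-sampling rate from the graph data. Below is a minimal sketch of that idea, assuming PyTorch: a learnable keep probability for edges, relaxed with a binary concrete (Gumbel-sigmoid) distribution so it can be trained by backpropagation. The class name `AdaptiveEdgeDrop` and its interface are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveEdgeDrop(nn.Module):
    """Hypothetical sketch of adaptive connection sampling: each layer keeps
    an edge with a learnable probability, relaxed via a binary concrete
    distribution so the drop rate is trained jointly with the GNN weights."""

    def __init__(self, temperature: float = 0.67):  # 0.67 as reported in the paper
        super().__init__()
        # logit of the keep probability, learned from the graph training data
        self.keep_logit = nn.Parameter(torch.zeros(1))
        self.temperature = temperature

    def forward(self, edge_weight: torch.Tensor) -> torch.Tensor:
        if not self.training:
            # at test time, scale by the expected keep probability
            return edge_weight * torch.sigmoid(self.keep_logit)
        # binary concrete relaxation of a Bernoulli edge mask
        u = torch.rand_like(edge_weight)
        logistic_noise = torch.log(u) - torch.log1p(-u)
        mask = torch.sigmoid((self.keep_logit + logistic_noise) / self.temperature)
        return edge_weight * mask
```

Applied per layer (e.g., `AdaptiveEdgeDrop()(torch.ones(num_edges))`), this replaces a fixed drop rate with one selected by the data, which is the mechanism the row above credits for reduced over-smoothing and over-fitting.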
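
For the Open Datasets and Dataset Splits rows, here is a minimal data-loading sketch, assuming PyTorch Geometric (the paper does not name its dependencies). `split="public"` reproduces the fixed Kipf & Welling (2017) train/validation/test masks for Cora and Citeseer; Cora-ML follows the Bojchevski & Günnemann (2018) preprocessing and is shipped separately as `CitationFull`.

```python
# A minimal sketch, assuming PyTorch Geometric; the paper itself only states
# that it follows the Kipf & Welling (2017) preprocessing and splits.
from torch_geometric.datasets import Planetoid
import torch_geometric.transforms as T

# split="public" selects the fixed train/val/test masks from Kipf & Welling
# (2017); NormalizeFeatures row-normalizes the bag-of-words inputs.
dataset = Planetoid(root="data/Cora", name="Cora", split="public",
                    transform=T.NormalizeFeatures())
data = dataset[0]
print(data.train_mask.sum(), data.val_mask.sum(), data.test_mask.sum())
```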
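
The Experiment Setup row is effectively a training recipe. The sketch below wires the reported hyperparameters (2000 epochs, learning rate 0.005, 128-dimensional hidden layers, per-dataset L2 factors, 50-epoch KL warm-up) into a toy loop, assuming PyTorch. The model, data, and the linear warm-up schedule are assumptions for illustration; the paper only says the KL term is imposed gradually.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins so the recipe runs end to end; shapes and the model are
# hypothetical, only the hyperparameters come from the paper.
n, d, c = 100, 16, 7
features = torch.randn(n, d)
adj = torch.eye(n)                              # placeholder normalized adjacency
labels = torch.randint(0, c, (n,))
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[:20] = True

class ToyGCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.w1 = nn.Linear(d, 128)             # 128-dimensional hidden layer
        self.w2 = nn.Linear(128, c)
    def forward(self, x, a):
        h = F.relu(a @ self.w1(x))
        return F.log_softmax(a @ self.w2(h), dim=-1)
    def kl_divergence(self):
        return torch.tensor(0.0)                # stand-in for the beta-Bernoulli KL term

model = ToyGCN()
# L2 factor: 5e-3 (Cora), 1e-2 (Citeseer), 1e-3 (Cora-ML), here via weight_decay
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-3)

for epoch in range(2000):
    model.train()
    optimizer.zero_grad()
    out = model(features, adj)
    nll = F.nll_loss(out[train_mask], labels[train_mask])
    # warm-up: ramp the KL weight over the first 50 epochs (linear ramp assumed)
    kl_scale = min(1.0, (epoch + 1) / 50.0)
    loss = nll + kl_scale * model.kl_divergence()
    loss.backward()
    optimizer.step()
    # the paper also holds out a validation set for early stopping (omitted here)
```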