Bilinear Graph Neural Network with Neighbor Interactions

Authors: Hongmin Zhu, Fuli Feng, Xiangnan He, Xiang Wang, Yan Li, Kai Zheng, Yongdong Zhang

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Empirical results on three public benchmarks of semi-supervised node classification verify the effectiveness of BGNN: BGCN (BGAT) outperforms GCN (GAT) by 1.6% (1.5%) in classification accuracy. |
| Researcher Affiliation | Collaboration | 1. University of Science and Technology of China; 2. National University of Singapore; 3. Beijing Kuaishou Technology Co., Ltd., Beijing, China; 4. University of Electronic Science and Technology of China |
| Pseudocode | No | The paper describes mathematical formulations and a model framework, but it does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/zhuhm1996/bgnn. |
| Open Datasets | Yes | Following previous works [Sen et al., 2008; Yang et al., 2016; Veličković et al., 2018], the paper uses three benchmark citation-network datasets: Pubmed, Cora, and Citeseer [Sen et al., 2008]. |
| Dataset Splits | Yes | 20 labeled nodes per class are used for training; 500 nodes and 1,000 nodes are used as the validation set and test set, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow) used in the experiments. |
| Experiment Setup | Yes | The dropout rates, λ, β, and α are selected within [0, 0.2, 0.4, 0.6], [0, 1e-4, 5e-4, 1e-3], [0, 0.1, 0.3, ..., 0.9, 1], and [0, 0.1, 0.3, ..., 0.9, 1], respectively. All BGNN-based models are trained for 2,000 epochs with an early stopping strategy based on both convergence behavior and accuracy on the validation set. |
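The hyperparameter search quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's actual implementation (which lives at https://github.com/zhuhm1996/bgnn): the `select_best` validation callable is a hypothetical placeholder standing in for the full training loop, and the expansion of the ellipsis "[0, 0.1, 0.3, ..., 0.9, 1]" into odd tenths plus the endpoints is an assumption.

```python
from itertools import product

# Hyperparameter grids quoted in the Experiment Setup row above.
dropout_rates = [0, 0.2, 0.4, 0.6]
lambdas = [0, 1e-4, 5e-4, 1e-3]
# Assumption: the paper's "[0, 0.1, 0.3, ..., 0.9, 1]" means the odd
# tenths between the endpoints.
betas = [0, 0.1, 0.3, 0.5, 0.7, 0.9, 1]
alphas = [0, 0.1, 0.3, 0.5, 0.7, 0.9, 1]

def grid():
    """Yield every hyperparameter combination in the search space."""
    for dropout, lam, beta, alpha in product(dropout_rates, lambdas, betas, alphas):
        yield {"dropout": dropout, "lambda": lam, "beta": beta, "alpha": alpha}

def select_best(validate):
    """Return the configuration with the highest validation score.

    `validate` is a placeholder callable (config -> validation accuracy);
    in the real pipeline it would train a BGNN model for up to 2,000
    epochs with early stopping and report validation-set accuracy.
    """
    return max(grid(), key=validate)
```

For example, `len(list(grid()))` is 4 × 4 × 7 × 7 = 784 candidate configurations, each of which would be scored on the 500-node validation split before the best model is evaluated on the 1,000-node test split.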