Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning

Authors: Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the effectiveness of Graph Heat on three benchmarks. A detailed analysis about the influence of hyper-parameter is also conducted. Lastly, we show a case to intuitively demonstrate the strengths of our method." and "Experiments show that Graph Heat achieves state-of-the-art results in the task of graph-based semi-supervised classification across benchmark datasets: Cora, Citeseer and Pubmed."
Researcher Affiliation | Academia | Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen and Xueqi Cheng. CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China. {xubingbing, shenhuawei, caoqi, cenketing, cxq}@ict.ac.cn
Pseudocode | No | The paper describes the model architecture and equations in Section 3.4, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not contain any explicit statements about making the source code available, nor does it provide a link to a code repository.
Open Datasets | Yes | "We conduct experiments on three benchmark datasets, namely, Cora, Citeseer and Pubmed [Sen et al., 2008]."
Dataset Splits | Yes | "The partition of datasets is the same as GCN [Kipf and Welling, 2017] with an additional validation set of 500 labeled samples to determine hyper-parameters."
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and dropout, but it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions or other libraries).
Experiment Setup | Yes | "We train a two-layer Graph Heat with 16 hidden units, and prediction accuracy is evaluated on a test set of 1000 labeled nodes. ... Weights are initialized following [Glorot and Bengio, 2010]. We adopt the Adam optimizer [Kingma and Ba, 2014] for parameter optimization with an initial learning rate lr = 0.01. ... The optimal hyper-parameters, e.g., scaling parameter s and threshold ϵ, are chosen through validation set. For Cora, s = 3.5 and ϵ = 1e-4. For Citeseer, s = 4.5 and ϵ = 1e-5. For Pubmed, s = 3.0 and ϵ = 1e-5. To avoid overfitting, dropout [Srivastava et al., 2014] is applied and the value is set as 0.5. The training process is terminated if the validation loss does not decrease for 200 consecutive epochs."
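The two hyper-parameters quoted above control the heat-kernel graph filter that the paper builds its convolution on: s is the diffusion scale in e^{-sL}, and ϵ is the threshold below which kernel entries are zeroed to keep the filter sparse. A minimal NumPy sketch of that filter is given below; the function name and the use of a dense eigendecomposition are our own assumptions for illustration, not the authors' released code (none is available).

```python
import numpy as np

def heat_kernel_filter(adj, s=3.5, eps=1e-4):
    """Illustrative sketch of a heat-kernel graph filter (not the authors' code).

    adj : dense symmetric adjacency matrix (NumPy array).
    s   : scaling (diffusion) hyper-parameter, e.g. 3.5 for Cora.
    eps : threshold below which kernel entries are zeroed, e.g. 1e-4 for Cora.
    """
    deg = adj.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Heat kernel e^{-sL}, computed via eigendecomposition of the symmetric L
    vals, vecs = np.linalg.eigh(lap)
    kernel = (vecs * np.exp(-s * vals)) @ vecs.T
    # Sparsify: drop entries whose magnitude falls below the threshold eps
    kernel[np.abs(kernel) < eps] = 0.0
    return kernel
```

A two-layer model in the spirit of the quoted setup would then propagate features as kernel @ X @ W in each layer (16 hidden units), trained with Adam at lr = 0.01, dropout 0.5, and early stopping after 200 epochs without validation-loss improvement.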