Graph Neural Stochastic Diffusion for Estimating Uncertainty in Node Classification
Authors: Xixun Lin, Wenxiao Zhang, Fengzhao Shi, Chuan Zhou, Lixin Zou, Xiangyu Zhao, Dawei Yin, Shirui Pan, Yanan Cao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on multiple detection tasks, demonstrating that GNSD yields superior performance over existing strong approaches. … Section 5 (Experiment): In this section, we conduct extensive experiments in the scenario of semi-supervised node classification with the following tasks: out-of-distribution (OOD) detection, misclassification detection and graph structure shifts. For each task, uncertainty estimation plays a critical role. Additionally, we conduct a series of model analyses including model variants, visual study and runtime comparison for comprehensive evaluations. (A hedged sketch of how such detection tasks are conventionally scored appears after the table.) |
| Researcher Affiliation | Collaboration | Xixun Lin (1), Wenxiao Zhang (2), Fengzhao Shi (1), Chuan Zhou (3,4), Lixin Zou (5), Xiangyu Zhao (6), Dawei Yin (7), Shirui Pan (8), Yanan Cao (1,4). (1) Institute of Information Engineering, Chinese Academy of Sciences; (2) Beijing Jiaotong University; (3) Academy of Mathematics and Systems Science, Chinese Academy of Sciences; (4) School of Cyber Security, University of Chinese Academy of Sciences; (5) Wuhan University; (6) City University of Hong Kong; (7) Baidu Inc.; (8) Griffith University. Correspondence to: Yanan Cao <caoyanan@iie.ac.cn>. |
| Pseudocode | Yes | D. Training Algorithm: The pseudo-code of the training algorithm of GNSD is provided in Algorithm 1. Algorithm 1 (the training algorithm of GNSD). Input: undirected graph G with input feature matrix X. Output: learned input encoder φ, drift network f_θ, stochastic forcing network g_θ and output decoder η. 1: Initialize model parameters. 2: for # training epochs do 3: Embed X into the representation space as initial node embeddings, i.e., φ(X) = H(0) = H_0. 4: for n = 0 to N−1 do 5: H_{n+1} = H_n + τ(A(H_n) − I)H_n + g(H_n)ΔW_τ. 6: end for 7: Use η to model P(y\|H(T)) where H(T) = H_{N−1}. 8: Calculate the distributional uncertainty loss of Eq. (12). 9: Update model parameters with the Adam optimizer. 10: end for. (A runnable sketch of the Euler–Maruyama update in step 5 appears after the table.) |
| Open Source Code | No | The paper mentions the implementations of baselines but does not provide a statement or link for the open-source code of their own proposed methodology (GNSD). For example: "GCN-Ensemble is implemented by ourselves. For GRAND4, GREAD5, BGCN6, GKDE7 and GPN8, we refer to their public implementations and adapt them to different tasks." |
| Open Datasets | Yes | Datasets. We evaluate our model on five benchmark graph datasets including one co-purchase graph (McAuley et al., 2015) (Amazon-Computers), three citation graphs (Sen et al., 2008) (Cora, CiteSeer and PubMed) and one academic graph (Hu et al., 2020) (OGBN-Arxiv). |
| Dataset Splits | Yes | For Amazon-Computers, the training/validation/testing split is 2:1:1. For the three citation graphs and OGBN-Arxiv, we use the public data split as suggested in the original papers (Yang et al., 2016; Hu et al., 2020). (See the loading and split sketch after the table.) |
| Hardware Specification | Yes | E.1. Experimental Environment The experiments are conducted on a Linux server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz, 128G RAM and NVIDIA Tesla V100. |
| Software Dependencies | Yes | E.1. Experimental Environment: ...We implement our model and all baselines with the deep learning library PyTorch (version 1.11) in Python 3.8. |
| Experiment Setup | Yes | E.6. Hyper-parameter Setting: We fix the hidden dimension size, the number of training epochs and the time T to 64, 200 and 1.0, respectively. For the other hyper-parameters, we use grid search to find their optimal values within the hyper-parameter search space shown in Table 6. The best hyper-parameter configurations for each dataset are listed here: {0.01, 0.0005, 1, 0.0, 0.3, 0.1} for Amazon-Computers, {0.01, 0.01, 5, 0.0, 0.0, 0.1} for Cora, {0.01, 0.0005, 3, 0.0, 0.0, 0.1} for CiteSeer, {0.1, 0.0005, 3, 0.0, 0.0, 0.1} for Pubmed and {0.01, 0.01, 1, 0.0, 0.0, 0.1} for OGBN-Arxiv. (A generic grid-search sketch appears after the table.) |
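
The Euler–Maruyama update in step 5 of Algorithm 1 is compact enough to state in code. Below is a minimal PyTorch sketch of that step, H_{n+1} = H_n + τ(A(H_n) − I)H_n + g(H_n)ΔW_τ, under stated assumptions: `drift` stands in for the attention-based matrix A(·) of the drift network f_θ, `forcing` for the stochastic forcing network g_θ, and the noise is applied entrywise. This is not the authors' released code (the paper links none).

```python
import torch
import torch.nn as nn

class GNSDDiffusion(nn.Module):
    """Euler-Maruyama discretization of the graph neural SDE (Algorithm 1, step 5).

    A sketch: `drift` maps H -> A(H) (an N x N diffusion/attention matrix) and
    `forcing` maps H -> g(H) (an N x d diffusion coefficient). Both module
    names are assumptions for illustration.
    """

    def __init__(self, drift: nn.Module, forcing: nn.Module, tau: float, n_steps: int):
        super().__init__()
        self.drift = drift
        self.forcing = forcing
        self.tau = tau          # step size tau
        self.n_steps = n_steps  # number of integration steps N

    def forward(self, h0: torch.Tensor) -> torch.Tensor:
        h = h0
        for _ in range(self.n_steps):
            a = self.drift(h)                           # A(H_n)
            drift_term = self.tau * (a @ h - h)         # tau * (A(H_n) - I) H_n
            dw = torch.randn_like(h) * self.tau ** 0.5  # Wiener increment ~ N(0, tau I)
            h = h + drift_term + self.forcing(h) * dw   # + g(H_n) * dW
        return h
```

The √τ scaling of the Gaussian increment is the standard Euler–Maruyama choice, since the Wiener increment ΔW_τ has variance τ.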
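
For the Open Datasets and Dataset Splits rows, the five benchmarks are all available through standard loaders. A hedged sketch, assuming PyTorch Geometric and the OGB package (the paper does not say which loaders it uses), including a random 2:1:1 node split for Amazon-Computers, which ships with no canonical split:

```python
import torch
from torch_geometric.datasets import Planetoid, Amazon
from ogb.nodeproppred import PygNodePropPredDataset

# Citation graphs ship with the public split of Yang et al. (2016).
cora = Planetoid(root="data", name="Cora")
citeseer = Planetoid(root="data", name="CiteSeer")
pubmed = Planetoid(root="data", name="PubMed")

# OGBN-Arxiv ships with the public split of Hu et al. (2020).
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data")
split_idx = arxiv.get_idx_split()  # dict with 'train'/'valid'/'test' index tensors

# Amazon-Computers: random 2:1:1 training/validation/testing split, as in the paper.
data = Amazon(root="data", name="Computers")[0]
n = data.num_nodes
perm = torch.randperm(n)
train_idx = perm[: n // 2]             # 2 parts
val_idx = perm[n // 2 : 3 * n // 4]    # 1 part
test_idx = perm[3 * n // 4 :]          # 1 part
```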
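
The Research Type row quotes three uncertainty-driven tasks (OOD detection, misclassification detection, graph structure shifts). Such detection tasks are conventionally scored by using the model's uncertainty estimate as the detection score; the sketch below shows that convention with scikit-learn. All variable names (`uncertainty`, `pred`, `labels`, `ood_mask`, `test_idx`) are placeholders, not the paper's.

```python
import torch
from sklearn.metrics import roc_auc_score, average_precision_score

def detection_scores(uncertainty: torch.Tensor, positives: torch.Tensor):
    """AUROC/AUPR of flagging `positives` (errors or OOD nodes) by uncertainty."""
    u = uncertainty.detach().cpu().numpy()
    t = positives.cpu().numpy().astype(int)
    return roc_auc_score(t, u), average_precision_score(t, u)

# Misclassification detection: positives are wrongly classified test nodes.
# auroc, aupr = detection_scores(uncertainty[test_idx], pred[test_idx] != labels[test_idx])

# OOD detection: positives are nodes held out as out-of-distribution.
# auroc, aupr = detection_scores(uncertainty, ood_mask)
```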
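
Finally, the Experiment Setup row lists selected hyper-parameter values without naming the parameter each value binds to (that mapping lives in the paper's Table 6). A generic sketch of the grid search that E.6 describes, with assumed parameter names, assumed candidate values, and a hypothetical `train_and_validate` routine:

```python
from itertools import product

# Assumed search space for illustration; the actual space is the paper's Table 6.
grid = {
    "lr": [0.1, 0.01],
    "weight_decay": [0.01, 0.0005],
    "n_steps": [1, 3, 5],
    "dropout": [0.0, 0.1, 0.3],
}

def train_and_validate(cfg: dict) -> float:
    # Hypothetical stand-in: train GNSD with `cfg` and return validation accuracy.
    return 0.0

best_val_acc, best_cfg = float("-inf"), None
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    val_acc = train_and_validate(cfg)
    if val_acc > best_val_acc:
        best_val_acc, best_cfg = val_acc, cfg
print(best_cfg, best_val_acc)
```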