Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
Authors: Xiao Wang, Hongrui Liu, Chuan Shi, Cheng Yang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our proposed model in terms of both calibration and accuracy. |
| Researcher Affiliation | Academia | Xiao Wang, Hongrui Liu, Chuan Shi, Cheng Yang — School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, China. {xiaowang, liuhongrui, shichuan, yangcheng}@bupt.edu.cn |
| Pseudocode | No | The paper includes figures illustrating frameworks but does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We choose the commonly used citation networks Cora [29], Citeseer [29], Pubmed [29] and Cora Full [3] for evaluation, and more detailed descriptions are in Appendix B. |
| Dataset Splits | Yes | Given an unlabeled dataset DU and a labeled dataset DL which has been divided into three parts Dtrain, Dval and Dtest, we firstly train a classification GCN using Dtrain to get the logit of each node. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions using GCN and GAT, and implementing post-hoc calibration techniques, but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | For our CaGCN, we train a two-layer GCN with the hidden layer dimension set to 16. We set λ = 0.5 for all datasets, and weight decay to 5e-3 for Cora, Citeseer, Pubmed and 0.03 for Cora Full. Other parameters of CaGCN follow [16]. We set the learning rate lr = 0.001 for CaGCN-st and train CaGCN-st for 200 epochs on Cora, 150 epochs on Citeseer, 100 epochs on Pubmed and 500 epochs on Cora Full. We set the threshold τ ∈ {0.8, 0.85, 0.9, 0.95, 0.99} and the maximum number of stages s = 10. |
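The Experiment Setup row describes CaGCN-st's self-training loop: at each of up to s = 10 stages, unlabeled nodes whose calibrated confidence exceeds a threshold τ are added to the training set with their predicted labels. A minimal sketch of that selection step is below; the function name and NumPy formulation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def select_pseudo_labels(probs, labeled_mask, tau=0.9):
    """Pick unlabeled nodes whose calibrated confidence exceeds tau.

    probs: (N, C) calibrated class probabilities for all N nodes.
    labeled_mask: (N,) boolean, True for nodes already in the training set.
    Returns (indices of newly pseudo-labeled nodes, their predicted labels).
    NOTE: a hypothetical helper sketching one self-training stage, not CaGCN-st itself.
    """
    confidence = probs.max(axis=1)   # max class probability per node
    preds = probs.argmax(axis=1)     # predicted class per node
    candidates = (~labeled_mask) & (confidence >= tau)
    return np.where(candidates)[0], preds[candidates]

# Toy example: 4 nodes, 2 classes; node 0 is already labeled.
probs = np.array([[0.97, 0.03],
                  [0.92, 0.08],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labeled = np.array([True, False, False, False])
idx, labels = select_pseudo_labels(probs, labeled, tau=0.9)
print(idx, labels)  # nodes 1 and 3 clear the 0.9 threshold
```

In the paper's procedure this selection would be repeated for each stage, retraining the GCN on the enlarged labeled set, with τ swept over {0.8, 0.85, 0.9, 0.95, 0.99}.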