Uncertainty Quantification over Graph with Conformalized Graph Neural Networks

Authors: Kexin Huang, Ying Jin, Emmanuel Candès, Jure Leskovec

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that CF-GNN achieves any pre-defined target marginal coverage while significantly reducing the prediction set/interval size by up to 74% over the baselines." (How coverage and set size are computed is sketched after the table.)
Researcher Affiliation | Academia | "Kexin Huang (1), Ying Jin (2), Emmanuel Candès (2,3), Jure Leskovec (1); (1) Department of Computer Science, Stanford University; (2) Department of Statistics, Stanford University; (3) Department of Mathematics, Stanford University"
Pseudocode | Yes | "Algorithm 1: Pseudo-code for CF-GNN algorithm." (A split-conformal calibration sketch follows the table.)
Open Source Code | Yes | "The code is available at https://github.com/snap-stanford/conformalized-gnn."
Open Datasets | Yes | "For node classification, we use the common node classification datasets in the PyTorch Geometric package. For node regression, we use datasets in [20]." (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | "For node classification, we follow a standard semi-supervised learning evaluation procedure [24], where we randomly split the data into 20%/10%/70% of nodes as Dtrain / Dvalid / (Dcalib ∪ Dtest)." (A node-splitting sketch follows the table.)
Hardware Specification | Yes | "Each experiment is done with a single NVIDIA 2080 Ti RTX 11GB GPU."
Software Dependencies | No | The paper mentions the PyTorch Geometric package but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | "Table 5: Hyperparameter range for CF-GNN. Classification, GNNϑ: hidden dimension [16, 32, 64, 128, 256]; learning rate [1e-1, 1e-2, 1e-3, 1e-4]; number of GNN layers [1, 2, 3, 4]." (A hyperparameter-grid sketch follows the table.)
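
The two quantities reported in the Research Type row, empirical marginal coverage and average prediction set size, can be computed as in the minimal sketch below. The function and variable names are illustrative and are not taken from the paper's code.

```python
import numpy as np

def coverage_and_size(pred_sets, labels):
    """Empirical marginal coverage and average prediction-set size.

    pred_sets: boolean array of shape (n_test, n_classes); True marks a
               class included in the prediction set of that test node.
    labels:    integer array of shape (n_test,) with the true classes.
    """
    covered = pred_sets[np.arange(len(labels)), labels]   # is y_i in C(x_i)?
    coverage = covered.mean()                              # target: >= 1 - alpha
    avg_size = pred_sets.sum(axis=1).mean()                # smaller sets are more informative
    return float(coverage), float(avg_size)
```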
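Algorithm 1 in the paper defines CF-GNN, which trains a second, topology-aware GNN to correct the base model's scores before conformal calibration. The sketch below shows only the standard split-conformal step that such (possibly corrected) softmax scores are fed into, using the common 1 - p_y nonconformity score; it is a minimal sketch, not the authors' implementation, and the correction network is omitted.

```python
import numpy as np

def split_conformal_sets(probs_calib, y_calib, probs_test, alpha=0.1):
    """Split conformal prediction sets from (corrected) GNN softmax outputs.

    probs_*: arrays of shape (n, n_classes) with class probabilities.
    Returns a boolean matrix marking the classes kept for each test node.
    """
    n = len(y_calib)
    # Nonconformity score on calibration nodes: 1 - probability of the true class.
    scores = 1.0 - probs_calib[np.arange(n), y_calib]
    # Finite-sample-corrected quantile level for target coverage 1 - alpha.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Keep every class whose score does not exceed the calibrated threshold.
    return (1.0 - probs_test) <= qhat
```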
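As a minimal example of loading one of the common PyTorch Geometric node-classification datasets, the snippet below uses the Planetoid loader with Cora; the dataset choice and root path are illustrative, not a statement of exactly which datasets the paper evaluates on.

```python
from torch_geometric.datasets import Planetoid

# Cora is one of the standard PyTorch Geometric node-classification benchmarks;
# the root directory here is an arbitrary local path.
dataset = Planetoid(root="data/Planetoid", name="Cora")
data = dataset[0]                              # a single graph Data object
print(data.num_nodes, dataset.num_classes)     # graph size and label count
print(data.x.shape, data.edge_index.shape)     # node features, COO edge index
```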
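The Dataset Splits row describes a 20%/10%/70% random node split into Dtrain / Dvalid / (Dcalib ∪ Dtest). The sketch below reproduces that split; how the remaining 70% is divided between calibration and test is not restated in the quote, so the 50/50 split of that portion is an assumption made purely for illustration.

```python
import torch

def random_node_split(num_nodes, seed=0):
    """Randomly assign nodes to train/valid/calib/test boolean masks.

    20% train and 10% validation; the remaining 70% is shared by the
    calibration and test sets (a 50/50 split of that 70% is assumed here).
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train, n_valid = int(0.2 * num_nodes), int(0.1 * num_nodes)
    n_calib = (num_nodes - n_train - n_valid) // 2
    masks = {k: torch.zeros(num_nodes, dtype=torch.bool)
             for k in ("train", "valid", "calib", "test")}
    masks["train"][perm[:n_train]] = True
    masks["valid"][perm[n_train:n_train + n_valid]] = True
    masks["calib"][perm[n_train + n_valid:n_train + n_valid + n_calib]] = True
    masks["test"][perm[n_train + n_valid + n_calib:]] = True
    return masks
```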
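The Table 5 ranges for the classification GNNϑ can be written as a simple search grid; the dictionary layout and random-sampling helper below are illustrative and are not the authors' configuration code.

```python
import random

# Hyperparameter ranges from Table 5 for the classification GNN_theta;
# the dict layout and the sampling helper are illustrative only.
SEARCH_SPACE = {
    "hidden_dim":     [16, 32, 64, 128, 256],
    "learning_rate":  [1e-1, 1e-2, 1e-3, 1e-4],
    "num_gnn_layers": [1, 2, 3, 4],
}

def sample_config(seed=None):
    """Draw one configuration uniformly at random from the grid."""
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

print(sample_config(seed=0))
```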