Improving GNN Calibration with Discriminative Ability: Insights and Strategies

Authors: Yujie Fang, Xin Li, Qianyu Chen, Mingzhong Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct GNN calibration experiments across multiple datasets using a straightforward example model, denoted as DC(GNN). Its excellent performance confirms the potential of integrating discriminative ability as a key consideration in the calibration of GNNs."
Researcher Affiliation | Academia | Yujie Fang (1), Xin Li* (1), Qianyu Chen (1), Mingzhong Wang (2); (1) Beijing Institute of Technology, (2) University of the Sunshine Coast
Pseudocode | No | The paper does not contain a clearly labeled "Pseudocode" or "Algorithm" block, nor structured steps formatted like code.
Open Source Code | No | The paper does not include an explicit statement about releasing the source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | "We conducted experiments on 8 benchmark datasets, with each experiment consisting of 5 splits (train/val/test: 10%-5%-85%)."
Dataset Splits | Yes | "We conducted experiments on 8 benchmark datasets, with each experiment consisting of 5 splits (train/val/test: 10%-5%-85%)."
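For context, the 10%/5%/85% five-split protocol quoted above amounts to repeated random partitioning of the node indices. The sketch below is an illustrative assumption, not code from the paper: the function name `random_node_split`, the seed handling, and the 2708-node example (a Cora-sized graph) are all hypothetical.

```python
import numpy as np

def random_node_split(num_nodes, train_frac=0.10, val_frac=0.05, seed=0):
    """Randomly partition node indices into train/val/test sets
    following the quoted 10%/5%/85% ratio (illustrative only)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]
    return train_idx, val_idx, test_idx

# One of 5 random splits would be generated per seed, e.g. for a
# hypothetical 2708-node graph:
train_idx, val_idx, test_idx = random_node_split(2708, seed=0)
```

Running this with five different seeds would reproduce the "5 splits" setup, assuming the splits are indeed uniform random partitions (the paper does not specify the sampling procedure).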
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory specifications, or other machine configurations used for the experiments.
Software Dependencies | No | The paper does not list specific software components with version numbers (e.g., "Python 3.8, PyTorch 1.9, and CUDA 11.1") or the solver/package versions needed to replicate the experiments.
Experiment Setup | No | The paper states "For GCL, CRL, and RBS, we performed a grid search to find the optimal hyperparameters" and "For other baseline models, we followed the experimental settings in GATS". However, it does not explicitly provide the specific hyperparameter values for its own proposed model in the main text (e.g., learning rate, batch size, number of epochs, or the detailed architecture of the "simple MLP" used for g in DC(GNN)).