Improvements on Uncertainty Quantification for Node Classification via Distance-Based Regularization
Authors: Russell Hart, Linlin Yu, Yifei Lou, Feng Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state-of-the-art in both OOD detection and misclassification detection. |
| Researcher Affiliation | Academia | Russell Alan Hart (The University of Texas at Dallas, rah150030@utdallas.edu); Linlin Yu (The University of Texas at Dallas, linlin.yu@utdallas.edu); Yifei Lou (University of North Carolina at Chapel Hill, yflou@unc.edu); Feng Chen (The University of Texas at Dallas, feng.chen@utdallas.edu) |
| Pseudocode | No | The paper describes algorithms and models mathematically and textually but does not include a distinct pseudocode block or an algorithm labeled as such. |
| Open Source Code | Yes | The code is available at https://github.com/neoques/Graph-Posterior-Network. |
| Open Datasets | Yes | Datasets: We use three citation networks (i.e., Cora ML, CiteSeer, PubMed) [4], two co-purchase datasets [31] (i.e., Amazon Computers, Amazon Photos), two coauthor datasets [31] (i.e., Coauthor CS and Coauthor Physics), and a large dataset, OGBN-Arxiv [16]. (A hedged loading sketch follows the table.) |
| Dataset Splits | Yes | We use the same train/val/test split of 5/15/80 (%) as [32]. (The sketch after the table illustrates such a split.) |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as CPU or GPU models, or cloud instance types. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., specific PyTorch or Python versions). |
| Experiment Setup | Yes | Hyperparameters that we tune include the entropy regularization weight, the distance-based regularization form (R_D or R_α), and the weighting parameters (λ1, λ2), which are optimized based on the validation cross-entropy for each specific dataset. For a comprehensive overview of the hyperparameter configuration and ablation study, please refer to Appendix D. We use the Adam optimizer with a learning rate of 0.01. (A hedged training-setup sketch follows the table.) |
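
To make the dataset and split rows above concrete, here is a minimal sketch of how these benchmarks could be loaded and split. PyTorch Geometric, its `CitationFull`/`Amazon`/`Coauthor` loaders, and the uniform random 5/15/80 split below are assumptions for illustration only; the authors' repository (a fork of Graph Posterior Network) may organize data loading and splitting differently.

```python
# Hedged sketch: load one benchmark graph with PyTorch Geometric and draw a
# random 5/15/80 train/val/test node split.
import torch
from torch_geometric.datasets import CitationFull, Amazon, Coauthor

dataset = CitationFull(root="data", name="Cora_ML")  # CiteSeer/PubMed analogous
# Other benchmarks: Amazon(root="data", name="Computers" / "Photo"),
#                   Coauthor(root="data", name="CS" / "Physics")
data = dataset[0]

num_nodes = data.num_nodes
perm = torch.randperm(num_nodes)
n_train = int(0.05 * num_nodes)  # 5% train
n_val = int(0.15 * num_nodes)    # 15% validation

train_mask = torch.zeros(num_nodes, dtype=torch.bool)
val_mask = torch.zeros(num_nodes, dtype=torch.bool)
test_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[perm[:n_train]] = True
val_mask[perm[n_train:n_train + n_val]] = True
test_mask[perm[n_train + n_val:]] = True  # remaining 80% test
```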
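Similarly, here is a minimal training-setup sketch consistent with the reported configuration (Adam optimizer, learning rate 0.01, a loss with regularizers weighted by λ1 and λ2). Only the optimizer settings come from the paper; `TinyDirichletNet`, `uce_loss`, `entropy_reg`, and the zero-valued `distance_reg` are hypothetical stand-ins, not the paper's Graph Posterior Network backbone or its actual R_D / R_α terms. It reuses `data`, `dataset`, and `train_mask` from the previous sketch.

```python
import torch
import torch.nn.functional as F

class TinyDirichletNet(torch.nn.Module):
    """Hypothetical placeholder that emits Dirichlet parameters per node."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, num_classes)

    def forward(self, x):
        return F.softplus(self.lin(x)) + 1.0  # concentrations alpha > 1

def uce_loss(alpha, y):
    """Uncertain cross-entropy: expected CE under the Dirichlet."""
    alpha_y = alpha.gather(1, y.unsqueeze(1)).squeeze(1)
    return (torch.digamma(alpha.sum(dim=-1)) - torch.digamma(alpha_y)).mean()

def entropy_reg(alpha):
    """Negative Dirichlet entropy (GPN-style entropy regularization)."""
    return -torch.distributions.Dirichlet(alpha).entropy().mean()

def distance_reg(alpha):
    """Placeholder for the paper's distance-based term (R_D or R_alpha)."""
    return alpha.new_zeros(())  # the real definition is in the paper

model = TinyDirichletNet(data.num_features, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr from the paper
lambda1, lambda2 = 1e-3, 1e-4  # illustrative; tuned per dataset in the paper

for epoch in range(200):
    optimizer.zero_grad()
    alpha = model(data.x)
    loss = uce_loss(alpha[train_mask], data.y[train_mask])
    loss = loss + lambda1 * distance_reg(alpha) + lambda2 * entropy_reg(alpha)
    loss.backward()
    optimizer.step()
```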