Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Decoupled Graph Energy-based Model for Node Out-of-Distribution Detection on Heterophilic Graphs
Authors: Yuhan Chen, Yihong Luo, Yifan Song, Pengwen Dai, Jing Tang, Xiaochun Cao
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate that DeGEM, without OOD exposure during training, surpasses previous state-of-the-art methods, achieving an average AUROC improvement of 6.71% on homophilic graphs and 20.29% on heterophilic graphs, and even outperforms methods trained with OOD exposure. Our code is available at: https://github.com/draym28/DeGEM. |
| Researcher Affiliation | Academia | Yuhan Chen (1), Yihong Luo (2), Yifan Song (3), Pengwen Dai (4), Jing Tang (3, 2), Xiaochun Cao (4). (1) School of Computer Science and Engineering, Sun Yat-sen University; (2) The Hong Kong University of Science and Technology; (3) The Hong Kong University of Science and Technology (Guangzhou); (4) School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University |
| Pseudocode | Yes | Algorithm 1: Training algorithm of DeGEM. |
| Open Source Code | Yes | Our code is available at: https://github.com/draym28/DeGEM. |
| Open Datasets | Yes | We evaluate DeGEM on seven benchmark datasets for node classification tasks (Yang et al., 2016; Shchur et al., 2018; Rozemberczki et al., 2021; Wang et al., 2020; Pei et al., 2020), including four homophily datasets (Cora, Amazon-Photo, Twitch, and ogbn-Arxiv) and three heterophily datasets (Chameleon, Actor, and Cornell). |
| Dataset Splits | Yes | We split the ID dataset as 10%/10%/80% (train/valid/test), and use all the nodes in OOD dataset for evaluation. |
| Hardware Specification | Yes | We implement our model by PyTorch and conduct experiments on a 24GB RTX 3090 Ti. |
| Software Dependencies | No | We implement our model by PyTorch and conduct experiments on a 24GB RTX 3090 Ti. (No specific version number for PyTorch or other libraries is provided in the main text.) |
| Experiment Setup | Yes | Epoch number E = 200, MH layer number L = 5, hidden dimension d = 512, MCMC steps K = 20. We use Optuna (Akiba et al., 2019) to search hyper-parameters for our proposed model and baselines (see Appendix E.3 for detailed search space). |
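The 10%/10%/80% (train/valid/test) ID split reported in the table can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name, seed, and use of a plain shuffle are assumptions.

```python
import random

def split_ids(num_nodes, train_frac=0.10, valid_frac=0.10, seed=0):
    """Shuffle node indices and split them 10%/10%/80% (train/valid/test).

    All remaining nodes after the train and valid cuts form the test set,
    so the three parts are disjoint and cover every node exactly once.
    """
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(train_frac * num_nodes)
    n_valid = int(valid_frac * num_nodes)
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]  # remaining ~80%
    return train, valid, test

# Example: Cora has 2708 nodes, giving 270/270/2168 train/valid/test nodes.
train, valid, test = split_ids(2708)
```

The OOD nodes are not split at all: per the table, the entire OOD dataset is used for evaluation.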