Graph Out-of-Distribution Detection Goes Neighborhood Shaping
Authors: Tianyi Bao, Qitian Wu, Zetian Jiang, Yiting Chen, Jiawei Sun, Junchi Yan
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show the competitiveness of the proposed model across multiple datasets, as evidenced by up to a 15% increase in the AUROC and a 50% decrease in the FPR compared to existing state-of-the-art methods. |
| Researcher Affiliation | Academia | School of Artificial Intelligence & Department of Computer Science and Engineering & MoE Lab of AI, Shanghai Jiao Tong University, Shanghai, China. |
| Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We ground our experiments on six prominent real-world datasets, frequently employed in node classification benchmarks: Twitch-Explicit (Rozemberczki & Sarkar, 2021), ogbn-Arxiv (Hu et al., 2020), Amazon-Photo (McAuley et al., 2015), Coauthor-CS (Sinha et al., 2015), Coauthor-Physics (Shchur et al., 2018), and Cora (Sen et al., 2008). |
| Dataset Splits | Yes | For ID data, we employed the conventional random splits method (1:1:8 for training/validation/testing) as suggested by Kipf & Welling. |
| Hardware Specification | Yes | Most of the experiments run with an NVIDIA 2080Ti with 11GB memory, except for cases where the model requires larger GPU memory, for which we use an NVIDIA 3090 with 24GB memory for experiments. |
| Software Dependencies | Yes | Our implementation is based on Ubuntu 16.04, CUDA 11.0, PyTorch 1.13.0, and PyTorch Geometric 2.3.1. |
| Experiment Setup | Yes | We set the number of propagation steps k according to the size of the graph datasets, namely, k equal to 5 or 10 for most settings, and the propagation coefficient α = 0.5. For fair comparison, the GCN model with layer depth 2 and hidden size 64 is used as the backbone encoder for all the OOD discriminators. We detail the default hyper-parameters utilized across all scenarios, as delineated in Tab. 5. |
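The propagation described in the Experiment Setup row (k steps with coefficient α = 0.5) resembles personalized-PageRank-style feature smoothing. The sketch below is an illustration, not the paper's released code: the update rule z ← (1 − α)·Â z + α·h, the row-normalization of Â, and the function names are all assumptions made for clarity.

```python
# Hypothetical sketch of k-step propagation with teleport coefficient alpha.
# The update z <- (1 - alpha) * A_hat @ z + alpha * h is an assumed
# personalized-PageRank-style rule, not the paper's exact formulation.

def row_normalize(adj):
    """Row-normalize an adjacency matrix given as nested Python lists."""
    out = []
    for row in adj:
        s = sum(row)
        out.append([v / s if s else 0.0 for v in row])
    return out

def propagate(adj, h, k=5, alpha=0.5):
    """Run k propagation steps, mixing neighbor aggregates with the raw features h."""
    a_hat = row_normalize(adj)
    n, d = len(h), len(h[0])
    z = [row[:] for row in h]  # initialize z from the input features
    for _ in range(k):
        # agg[i] = sum_j A_hat[i][j] * z[j]  (neighborhood aggregation)
        agg = [[sum(a_hat[i][j] * z[j][f] for j in range(n)) for f in range(d)]
               for i in range(n)]
        # retain a fraction alpha of the original features at every step
        z = [[(1 - alpha) * agg[i][f] + alpha * h[i][f] for f in range(d)]
             for i in range(n)]
    return z
```

For example, on a two-node graph with one-hot features, a single step with α = 0.5 blends each node's feature equally with its neighbor's: `propagate([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]], k=1)` yields `[[0.5, 0.5], [0.5, 0.5]]`.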