Towards Self-Interpretable Graph-Level Anomaly Detection
Authors: Yixin Liu, Kaize Ding, Qinghua Lu, Fuyi Li, Leo Yu Zhang, Shirui Pan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 16 datasets demonstrate the anomaly detection capability and self-interpretability of SIGNET. |
| Researcher Affiliation | Academia | 1Monash University, 2Northwestern University, 3Data61, CSIRO, 4Northwest A&F University, 5The University of Adelaide, 6Griffith University |
| Pseudocode | Yes | More discussion about methodology, including the pseudo-code algorithm of SIGNET, the comparison between SIGNET and existing methods, and the complexity analysis of SIGNET, is illustrated in Appendix E. |
| Open Source Code | Yes | Our code is available at https://github.com/yixinliu233/SIGNET. |
| Open Datasets | Yes | We also verify the anomaly detection performance of SIGNET on 10 TU datasets [58], following the setting in [4]. ... MNIST-0 and MNIST-1 are two GLAD datasets derived from the MNIST-75sp superpixel dataset [59]. ... MUTAG is a molecular property prediction dataset [61]. (See the illustrative data-loading sketch after this table.) |
| Dataset Splits | No | Given the training set Gtr that contains a number of normal graphs, we aim at learning an explainable GLAD model f : G → (R, G) that is able to predict the abnormality of a graph and provide corresponding explanations. Specifically, given a graph Gi from the test set Gte with normal and abnormal graphs... The paper mentions a "training set" and a "test set" but does not explicitly provide validation split percentages or counts. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or detailed computer specifications used for running experiments. |
| Software Dependencies | No | In SIGNET, we use GIN [2] and Hyper-Conv [30] as the GNN and HGNN encoders. The paper names these software components but does not provide specific version numbers for any software dependencies. (See the illustrative encoder sketch after this table.) |
| Experiment Setup | Yes | For all methods, we perform 5 random runs and report the average performance. We use Adam optimizer with learning rate 0.001 and weight decay 0.0001. We train 100 epochs with early stopping (patience 20). We use a batch size of 128. For SIGNET, the GNN encoder (GIN) and HGNN encoder (Hyper-Conv) are 2-layer, and the hidden dimension is 64. The bottleneck subgraph extractor (MLP) has 2 layers. We set the temperature parameter τ = 0.5. The number of negative samples is 128 for Info-NCE. The trade-off parameter β = 0.001. (See the illustrative training-configuration sketch after this table.) |
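
For readers attempting reproduction, the TU datasets named in the Open Datasets row are available through PyTorch Geometric. The snippet below is a minimal, illustrative loading sketch for one of them (MUTAG); the normal/anomaly class assignment and the splits of the cited setting [4] are not reproduced here, and the root path is arbitrary.

```python
# Illustrative only: loads one of the named TU datasets (MUTAG) via PyTorch Geometric.
# The normal/anomaly class assignment and the splits of the cited setting [4] are not reproduced.
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

dataset = TUDataset(root="data/TUDataset", name="MUTAG")    # root path is arbitrary
loader = DataLoader(dataset, batch_size=128, shuffle=True)  # batch size 128, as reported

batch = next(iter(loader))
print(batch.num_graphs, batch.x.size(-1))                   # graphs per batch, node-feature dim
```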
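
The Software Dependencies row names GIN and Hyper-Conv encoders without versions. Both layer types exist in PyTorch Geometric (GINConv, HypergraphConv), so a plausible 2-layer, 64-dimensional encoder pair can be sketched as below. This is an assumption-based illustration of the reported architecture, not the authors' released implementation (see the linked GitHub repository for that).

```python
# Illustrative sketch, not the authors' released code. Assumes PyTorch Geometric
# provides the named layers (GINConv for GIN [2], HypergraphConv for Hyper-Conv [30]).
import torch.nn as nn
from torch_geometric.nn import GINConv, HypergraphConv, global_add_pool


class GINEncoder(nn.Module):
    """2-layer GIN graph encoder with hidden dimension 64, as reported."""

    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = GINConv(nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                           nn.Linear(hidden_dim, hidden_dim)))
        self.conv2 = GINConv(nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                           nn.Linear(hidden_dim, hidden_dim)))

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        return global_add_pool(h, batch)            # graph-level embedding


class HyperEncoder(nn.Module):
    """2-layer hypergraph encoder built from HypergraphConv layers."""

    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = HypergraphConv(in_dim, hidden_dim)
        self.conv2 = HypergraphConv(hidden_dim, hidden_dim)

    def forward(self, x, hyperedge_index, batch):
        h = self.conv1(x, hyperedge_index).relu()
        h = self.conv2(h, hyperedge_index)
        return global_add_pool(h, batch)            # hypergraph-level embedding
```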
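
The reported optimization settings translate into a short training configuration. The sketch below assumes an in-batch InfoNCE contrastive loss between the two views with temperature τ = 0.5 (so the 128 in-batch samples supply the 128 negatives) and omits the β-weighted term of the full objective; the `model(batch)` interface and the loss-based early-stopping criterion are illustrative assumptions, not the released SIGNET code.

```python
# Illustrative training configuration mirroring the reported hyperparameters;
# the model(batch) -> (z_graph, z_hyper) interface is an assumption.
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.5):
    """In-batch InfoNCE: matching pairs are positives, the rest are negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # [B, B] similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)


def train(model, train_loader, epochs=100, patience=20):
    # Reported setup: Adam, lr 0.001, weight decay 0.0001, batch size 128,
    # 100 epochs with early stopping (patience 20).
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    best_loss, wait = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        epoch_loss = 0.0
        for batch in train_loader:
            optimizer.zero_grad()
            z_graph, z_hyper = model(batch)               # two views of the batch
            loss = info_nce(z_graph, z_hyper, temperature=0.5)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss < best_loss - 1e-6:                 # simple loss-based early stopping
            best_loss, wait = epoch_loss, 0
        else:
            wait += 1
            if wait >= patience:
                break
```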