Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection
Authors: Xiangyu Dong, Xingyi Zhang, Sibo Wang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 10 real-world datasets show that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC, demonstrating the effectiveness of our framework. |
| Researcher Affiliation | Academia | Xiangyu Dong, Xingyi Zhang, Sibo Wang; Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; {xydong, xyzhang, swang}@se.cuhk.edu.hk |
| Pseudocode | Yes | Algorithm 1: RQL; Algorithm 2: CWGNN with RQ-pooling; Algorithm 3: RQGNN. (A background sketch of the Rayleigh quotient appears after the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/xydong127/RQGNN. |
| Open Datasets | Yes | Datasets. We use 10 real-world datasets to investigate the performance of RQGNN, including MCF7, MOLT-4, PC-3, SW-620, NCI-H23, OVCAR-8, P388, SF-295, SN12C, and UACC257. These datasets are obtained from the TUDataset (Morris et al., 2020), consisting of various chemical compounds and their reactions to different cancer cells. (See the loading sketch after the table.) |
| Dataset Splits | Yes | Experimental Settings. We randomly divide each dataset into training/validation/test sets with 70%/15%/15%, respectively. During the sampling process, we ensure that each set maintains a consistent ratio between normal and anomalous graphs. (See the split sketch after the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are mentioned in the paper. |
| Experiment Setup | Yes | Experimental Settings. We set the learning rate as 0.005, the batch size as 512, the hidden dimension d = 64, the width of CWGNN-RQ q = 4, the depth of CWGNN-RQ K = 6, the dropout rate as 0.4, the hyperparameters of the loss function β = 0.999, γ = 1.5, and we use batch normalization for the final graph embeddings. (These values are collected into a config sketch after the table.) |
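
The sketches below illustrate a few of the items above; they are assumptions and simplifications, not code from the authors' repository. First, background for the Pseudocode row: the Rayleigh quotient of a graph signal, the spectral quantity RQGNN is built around. This uses the standard definition x^T L x / x^T x with the symmetrically normalized Laplacian; the exact form computed by Algorithm 1 (RQL) should be checked against the paper and the released code.

```python
# Background sketch (not the authors' code): Rayleigh quotient of a graph
# signal x under the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
import numpy as np

def rayleigh_quotient(adj: np.ndarray, x: np.ndarray) -> float:
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return float(x @ lap @ x) / float(x @ x)

# Toy usage on a 3-node path graph: smooth signals give small values,
# oscillatory signals give values near the top of the spectrum ([0, 2]).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(rayleigh_quotient(A, np.array([1.0, 1.0, 1.0])))   # ~0.06
print(rayleigh_quotient(A, np.array([1.0, -1.0, 1.0])))  # ~1.94
```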
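
For the Open Datasets row, a minimal loading sketch. The paper does not state its data stack, so the use of PyTorch Geometric's TUDataset wrapper is an assumption here, and the registry names may differ slightly from the paper's spelling (e.g., "MCF-7" vs "MCF7").

```python
# Assumed loading path: PyTorch Geometric ships these TUDataset benchmarks.
from torch_geometric.datasets import TUDataset

NAMES = ["MCF-7", "MOLT-4", "PC-3", "SW-620", "NCI-H23",
         "OVCAR-8", "P388", "SF-295", "SN12C", "UACC257"]

# Each dataset is a collection of chemical-compound graphs with binary labels.
datasets = {name: TUDataset(root="data", name=name) for name in NAMES}
for name, ds in datasets.items():
    print(name, len(ds), "graphs")
```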
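
For the Dataset Splits row, a minimal sketch of the stratified 70%/15%/15% split. The helper name and the use of scikit-learn are assumptions; only the ratios and the stratification on graph labels come from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def stratified_split(labels: np.ndarray, seed: int = 0):
    idx = np.arange(len(labels))
    # 70% training, stratified so each set keeps the same anomaly ratio.
    train_idx, rest_idx, _, rest_y = train_test_split(
        idx, labels, train_size=0.70, stratify=labels, random_state=seed)
    # Split the remaining 30% evenly into validation and test (15% each).
    val_idx, test_idx = train_test_split(
        rest_idx, train_size=0.50, stratify=rest_y, random_state=seed)
    return train_idx, val_idx, test_idx
```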
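
Finally, the Experiment Setup row collected into one place, as a training script might consume it. The values are the paper's; the key names are illustrative, not taken from the authors' repository.

```python
# Reported hyperparameters (values from the paper; key names are illustrative).
CONFIG = {
    "learning_rate": 0.005,
    "batch_size": 512,
    "hidden_dim": 64,        # d
    "cwgnn_rq_width": 4,     # q
    "cwgnn_rq_depth": 6,     # K
    "dropout": 0.4,
    "loss_beta": 0.999,      # loss-function hyperparameter beta
    "loss_gamma": 1.5,       # loss-function hyperparameter gamma
    "batch_norm_final_embeddings": True,
}
```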