Dual-discriminative Graph Neural Network for Imbalanced Graph-level Anomaly Detection

Authors: Ge Zhang, Zhenyu Yang, Jia Wu, Jian Yang, Shan Xue, Hao Peng, Jianlin Su, Chuan Zhou, Quan Z. Sheng, Leman Akoglu, Charu Aggarwal

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate iGAD on four real-world graph datasets. Extensive experiments demonstrate the superiority of iGAD on the graph-level anomaly detection task. In this section, we conduct a series of experiments to study the performance of iGAD on the graph-level anomaly detection task.
Researcher Affiliation Collaboration 1 Macquarie University; 2 University of Wollongong; 3 Beihang University; 4 Zhuiyi Technology; 5 AMSS, Chinese Academy of Sciences; 6 Carnegie Mellon University; 7 IBM T. J. Watson Research Center
Pseudocode Yes The pseudocode of iGAD is included in Algorithm 1 in Appendix A.1.
Open Source Code Yes The code is available at https://github.com/graph-level-anomalies/iGAD.
Open Datasets Yes SW-610, MOLT-4, PC-3, and MCF-7 are four real-world graph datasets. These datasets are collected from PubChem (https://pubchem.ncbi.nlm.nih.gov), which records a tremendous number of chemical compounds and their anti-cancer activity testing results (active or inactive) on different types of cancer cell lines.
Dataset Splits Yes We report experimental results under 5-fold cross-validation. Given the training set, we initialize anomalous substructures and other parameters (Line 1 in Algorithm 1) and calculate the proportion of normal and anomalous graphs (Line 2). For each graph, iGAD learns its graph representation (Lines 3 to 15). Based on the graph representation, an MLP equipped with the PMI-based loss function gives predictions for graphs (Lines 16 and 17).
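The quoted 5-fold protocol can be sketched as follows. This is a minimal illustration, not the paper's code: the labels are synthetic, and the use of stratified splitting (to preserve the anomaly ratio per fold, which matters for imbalanced graph-level data) is an assumption on my part.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical imbalanced graph-level labels: 1 = anomalous, 0 = normal.
# The 90/10 ratio is illustrative, not taken from the paper.
labels = np.array([0] * 90 + [1] * 10)

# 5-fold cross-validation as reported; stratification keeps the
# anomaly ratio roughly constant across folds (an assumption here).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    ratio = labels[test_idx].mean()
    print(f"fold {fold}: test size={len(test_idx)}, anomaly ratio={ratio:.2f}")
```

With 90 normal and 10 anomalous graphs, every test fold contains 20 graphs with an anomaly ratio of 0.10, so metrics averaged over folds are comparable.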
Hardware Specification No The information is insufficient. The paper does not specify any particular hardware components (e.g., specific GPU or CPU models, memory details, or cloud instance types) used for running the experiments.
Software Dependencies Yes The adjacency matrix A_i is sparse, and we compute (A_i)^l by sparse matrix multiplication in PyTorch [34]. [34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: an imperative style, high-performance deep learning library. NeurIPS, 32, 2019.
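The quoted sparse-power computation can be sketched with PyTorch's sparse COO tensors. This is a toy illustration of the stated dependency, not the paper's implementation; the function name and the 3-node example graph are my own.

```python
import torch

def adjacency_powers(adj_sparse, max_hops):
    """Compute A^1, ..., A^K via sparse matrix multiplication,
    as used for the K-hop graph convolution (the paper sets K = 2)."""
    powers = [adj_sparse]
    for _ in range(max_hops - 1):
        powers.append(torch.sparse.mm(powers[-1], adj_sparse))
    return powers

# Toy undirected path graph on 3 nodes: edges 0-1 and 1-2.
idx = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
val = torch.ones(4)
A = torch.sparse_coo_tensor(idx, val, (3, 3)).coalesce()

A1, A2 = adjacency_powers(A, max_hops=2)
# (A^2)[0, 2] counts 2-step walks from node 0 to node 2.
print(A2.to_dense()[0, 2].item())  # → 1.0
```

Keeping A in sparse COO format means each multiplication scales with the number of edges rather than n^2, which is what makes the K-hop convolution tractable on large molecular graphs.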
Experiment Setup Yes Information about parameter setting and algorithm implementation can be found in Appendix B.1. The maximum number of hops K in anomalous attribute-aware graph convolution is set as 2. The number of anomalous substructures M is set as 5, and the size of each anomalous substructure n is set as 8. The maximum random walk length L is set as 5.
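The hyperparameters quoted above can be collected into a small config sketch. The numeric values are the ones stated in the paper; the class and field names are illustrative, not from the authors' code.

```python
from dataclasses import dataclass

@dataclass
class IGADConfig:
    """Hypothetical config mirroring the paper's reported settings."""
    max_hops: int = 2            # K: max hops in anomalous attribute-aware graph convolution
    num_substructures: int = 5   # M: number of anomalous substructures
    substructure_size: int = 8   # n: nodes per anomalous substructure
    max_walk_length: int = 5     # L: maximum random walk length

cfg = IGADConfig()
print(cfg)
```

Pinning these four values in one place makes it straightforward to reproduce the reported setup or to vary one knob at a time in an ablation.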