Boosting Graph Anomaly Detection with Adaptive Message Passing

Authors: Jingyan Chen, Guanghui Zhu, Chunfeng Yuan, Yihua Huang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on nine benchmark datasets, including two large-scale OGB datasets, demonstrate that GADAM surpasses existing state-of-the-art methods in terms of both effectiveness and efficiency.
Researcher Affiliation | Academia | Jingyan Chen, Guanghui Zhu, Chunfeng Yuan, Yihua Huang. State Key Laboratory for Novel Software Technology, Nanjing University. chenjy@smail.nju.edu.cn, {zgh, cfyuan, yhuang}@nju.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | GADAM is available at https://github.com/Pasa-Lab/GADAM
Open Datasets | Yes | Extensive experiments conducted on nine benchmark datasets, including seven datasets with injected synthetic anomalies and two datasets with organic anomalies, demonstrate that our method achieves state-of-the-art performance compared with a range of baselines. Moreover, GADAM shows superiority in both runtime and GPU overhead. The evaluated datasets include ogbn-Arxiv (Hu et al., 2020), Books (Sánchez et al., 2013), and Reddit (Kumar et al., 2019; Wang et al., 2021).
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning for validation.
Hardware Specification | Yes | All experiments are performed on 1 Tesla V100 GPU with 32GB memory.
Software Dependencies | Yes | Libraries that our method relies on are as follows: Python=3.7, CUDA=11.6, torch=1.12, dgl=0.9.0, networkx=2.6.3. For unsupervised baselines, we use benchmark implementations with pygod=0.3.0.
Experiment Setup | Yes | The proposed model consists of two training stages: local inconsistency mining and global consistency discernment. The number of MLP layers in each module is set to 1. The number of epochs, batch size, and learning rate for these two modules on each dataset are shown in Tab. 5 (Table 5: More details of hyperparameter settings; 0 denotes full batch, Local denotes local inconsistency mining, and Global denotes global consistency discernment).
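For reference, a local environment can be compared against the dependency versions quoted in the Software Dependencies row (Python 3.7, CUDA 11.6, torch 1.12, dgl 0.9.0, networkx 2.6.3, pygod 0.3.0) with a short script such as the minimal sketch below. This sketch is illustrative only and is not part of the authors' released GADAM code; the package names and versions come from the row above, while the checking logic itself is an assumption.

# Minimal sketch: compare installed package versions against those reported
# in the paper (Python 3.7, torch 1.12, dgl 0.9.0, networkx 2.6.3, pygod 0.3.0).
# Illustrative only; not taken from the authors' repository.
import sys

EXPECTED = {
    "torch": "1.12",
    "dgl": "0.9.0",
    "networkx": "2.6.3",
    "pygod": "0.3.0",  # used only for the unsupervised baselines
}

def main():
    # Report the interpreter version alongside the version used in the paper.
    print(f"Python {sys.version.split()[0]} (paper used 3.7)")
    for name, expected in EXPECTED.items():
        try:
            module = __import__(name)
            installed = getattr(module, "__version__", "unknown")
        except ImportError:
            installed = "not installed"
        status = "OK" if installed.startswith(expected) else "MISMATCH"
        print(f"{name}: installed={installed}, expected~={expected} [{status}]")

if __name__ == "__main__":
    main()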