Graph Anomaly Detection via Multi-Scale Contrastive Learning Networks with Augmented View

Authors: Jingcan Duan, Siwei Wang, Pei Zhang, En Zhu, Jingtao Hu, Hu Jin, Yue Liu, Zhibin Dong

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The comprehensive experimental results well demonstrate the superiority of our method compared with the state-of-the-art approaches and the effectiveness of the multi-view subgraph pair contrastive strategy for the GAD task.
Researcher Affiliation | Academia | College of Computer, National University of Defense Technology, Changsha, China. {jingcan_duan, yueliu19990731}@163.com, {wangsiwei13, zhangpei, enzhu, hujingtao17, jinhu, dzb20}@nudt.edu.cn
Pseudocode | Yes | Algorithm 1: Proposed model GRADATE. Input: An undirected graph G = (V, E); number of training epochs E; batch size B. Output: Anomaly score function f. (A hedged training-loop sketch follows the table.)
Open Source Code | Yes | The source code is released at https://github.com/FelixDJC/GRADATE.
Open Datasets | Yes | The proposed method is evaluated on six benchmark datasets whose details are shown in Table 2. The datasets include Citation (Yuan et al. 2021), Cora (Sen et al. 2008), WebKB (Craven et al. 1998), UAI2010 (Wang et al. 2018), UAT and EAT (Mrabah et al. 2022).
Dataset Splits | No | The paper does not provide the training/validation/test splits (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not report the hardware (exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper mentions its GCN and MLP components and loss functions, but does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Model parameters: in the node-subgraph and subgraph-subgraph contrasts, both GCN models have one layer and use ReLU as the activation function. The subgraph size is set to 4, and both node and subgraph features are mapped to 64 dimensions in the hidden space. Training runs for 400 epochs, with 256 rounds of anomaly score calculation. In practice, α is set to 0.9, 0.1, 0.7, 0.9, 0.7, and 0.5 on EAT, WebKB, UAT, Cora, UAI2010, and Citation, respectively, and β to 0.3, 0.7, 0.1, 0.3, 0.5, and 0.5. GRADATE performs well with γ = 0.1 across all benchmarks, and P is fixed at 0.2 on all datasets. (These settings are collected in the config sketch below.)
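
For orientation, here is a minimal PyTorch sketch of the training loop that Algorithm 1 describes, assuming the reported one-layer GCN encoder with ReLU, 64-dimensional hidden space, and 256 scoring rounds. The names (OneLayerGCN, agreement, train_epoch, anomaly_scores), the mean readout standing in for subgraph embeddings, and the loss weighting are illustrative assumptions, not the released code's API; the actual 4-node subgraph sampling, discriminator, and α/β/γ combination live in the authors' repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OneLayerGCN(nn.Module):
    """One GCN layer with ReLU, matching the reported encoder depth."""

    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # adj is assumed to be a normalized dense (N, N) adjacency matrix.
        return F.relu(adj @ self.lin(x))


def agreement(node_emb, graph_emb):
    """Toy discriminator: agreement between node embeddings and a readout."""
    return torch.sigmoid((node_emb * graph_emb).sum(-1))


def train_epoch(x, adj, adj_aug, enc, opt, alpha, beta):
    """One epoch over the original view and the edge-modified view.

    The 4-node subgraph sampling around each target node is elided; a mean
    readout stands in for the subgraph embedding. The weighting below is
    illustrative -- see the paper for the exact alpha/beta/gamma combination.
    """
    opt.zero_grad()
    h1, h2 = enc(x, adj), enc(x, adj_aug)   # node embeddings per view
    g1, g2 = h1.mean(0), h2.mean(0)         # stand-in subgraph readouts
    # Node-subgraph contrast: positives pair a node with its own readout,
    # negatives pair shuffled nodes with the same readout.
    pos = agreement(h1, g1)
    neg = agreement(h1[torch.randperm(len(h1))], g1)
    loss_ns = -(torch.log(pos + 1e-8) + torch.log(1 - neg + 1e-8)).mean()
    # Subgraph-subgraph contrast across the two views.
    loss_ss = 1 - F.cosine_similarity(g1, g2, dim=0)
    loss = alpha * loss_ns + beta * loss_ss
    loss.backward()
    opt.step()
    return float(loss)


@torch.no_grad()
def anomaly_scores(x, adj, enc, rounds=256):
    """Average the negative-positive agreement gap over several rounds,
    mirroring the paper's 256 rounds of anomaly score calculation."""
    scores = torch.zeros(x.shape[0])
    for _ in range(rounds):
        h = enc(x, adj)
        g = h.mean(0)
        scores += agreement(h[torch.randperm(len(h))], g) - agreement(h, g)
    return scores / rounds


# Usage sketch (hypothetical shapes):
# enc = OneLayerGCN(in_dim=x.shape[1])
# opt = torch.optim.Adam(enc.parameters())
# for _ in range(400):
#     train_epoch(x, adj, adj_aug, enc, opt, alpha=0.9, beta=0.3)
# scores = anomaly_scores(x, adj, enc)
```

Nodes are then ranked by this averaged score for evaluation; a higher score marks a node as more likely anomalous.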
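
The per-dataset hyperparameters quoted in the Experiment Setup row can be collected into a single configuration, useful as a reproduction checklist. The dict layout and key names below are our own; the values are taken verbatim from the quoted setup.

```python
# Hedged summary of the reported GRADATE hyperparameters; key names are
# illustrative, values come directly from the quoted experiment setup.
GRADATE_CONFIG = {
    "gcn_layers": 1,          # one GCN layer in both contrast branches
    "activation": "relu",
    "subgraph_size": 4,
    "hidden_dim": 64,         # node and subgraph embedding dimension
    "epochs": 400,
    "score_rounds": 256,      # rounds of anomaly score calculation
    "gamma": 0.1,             # same value across all benchmarks
    "edge_modify_p": 0.2,     # P, fixed on all datasets
    # (alpha, beta) per dataset, paired in the order reported in the paper
    "alpha_beta": {
        "EAT":      (0.9, 0.3),
        "WebKB":    (0.1, 0.7),
        "UAT":      (0.7, 0.1),
        "Cora":     (0.9, 0.3),
        "UAI2010":  (0.7, 0.5),
        "Citation": (0.5, 0.5),
    },
}
```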