Can Abnormality be Detected by Graph Neural Networks?
Authors: Ziwei Chai, Siqi You, Yang Yang, Shiliang Pu, Jiarong Xu, Haoyang Cai, Weihao Jiang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we perform evaluations on the effectiveness of AMNet on four real-world datasets. |
| Researcher Affiliation | Collaboration | Ziwei Chai¹, Siqi You¹, Yang Yang¹, Shiliang Pu², Jiarong Xu³, Haoyang Cai⁴ and Weihao Jiang³ — ¹Zhejiang University, ²Hikvision Research Institute, ³Fudan University, ⁴Carnegie Mellon University. {zwchai, ysseven, yangya}@zju.edu.cn, {pushiliang.hri, jiangweihao5}@hikvision.com, jiarongxu@fudan.edu.cn, hcai2@andrew.cmu.edu |
| Pseudocode | No | The paper describes the model and its components mathematically and descriptively, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/zjunet/AMNet |
| Open Datasets | Yes | We adopt four real-world datasets that have been used in previous research to evaluate AMNet. Characteristics of these datasets are summarized in Table 2: Yelp [Rayana and Akoglu, 2015], Elliptic [Weber et al., 2019], Fin V [Yang et al., 2019], and Telecom [Yang et al., 2021]. |
| Dataset Splits | No | The paper mentions using partial node labels for semi-supervised classification and fine-tuning baselines, but it does not provide explicit details on the training, validation, and test dataset splits (e.g., percentages, counts, or specific split methodology). |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU models, CPU types, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions various GNN models used for comparison and states that baselines were initialized with parameters from their official codes, implying the use of associated software. However, it does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow, specific libraries). |
| Experiment Setup | Yes | The filter number K of AMNet is set to 2. For all methods, we report the average results of 10 independent runs. We use the output embedding Z in Eq. (6) for semi-supervised classification. Suppose $\hat{Y} \in \mathbb{R}^{N \times 2}$ denotes the probability of nodes belonging to the anomalous and the normal classes. Then $\hat{Y}$ can be calculated with a linear transformation and a softmax function: $\hat{Y} = \mathrm{softmax}(ZW + b)$. We then obtain the overall objective function by combining the classification task and the constraint on attention: $L = L_c + \beta L_a$, where $L_c$ represents the loss derived from node classification (e.g., cross entropy) and $\beta \geq 0$ is the parameter that weights the constraint term $L_a$. For example, assuming two filters $\{g_L, g_H\}$ with attention values $\{\alpha^L, \alpha^H\}$, the margin-based constraint on attention can be defined as $L_a = \sum_i \max\big(0,\, r_i(\alpha_i^L - \alpha_i^H) + \zeta\big)$, where $\zeta$ is a slack variable which controls the margin between attention values, and $r_i = 1$ when $Y_i = 1$, else $r_i = -1$. |
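The objective described in the Experiment Setup row — a softmax classification head plus a margin-based constraint on the two filter attention values — can be sketched as follows. This is a minimal numpy illustration of the quoted equations, not the authors' released implementation; the function names, the default slack `zeta`, and the weight `beta` are placeholders chosen here for clarity.

```python
import numpy as np

def softmax(x):
    # Row-wise softmax with max-subtraction for numerical stability.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classification_head(Z, W, b):
    # Y_hat = softmax(Z W + b): per-node probabilities over {anomalous, normal}.
    return softmax(Z @ W + b)

def cross_entropy(Y_hat, y):
    # L_c: mean negative log-likelihood of the true class for each node.
    return -np.log(Y_hat[np.arange(len(y)), y]).mean()

def attention_margin_loss(alpha_L, alpha_H, y, zeta=0.5):
    # L_a = sum_i max(0, r_i * (alpha_i^L - alpha_i^H) + zeta),
    # with r_i = 1 for anomalous nodes (y_i = 1) and r_i = -1 otherwise.
    r = np.where(y == 1, 1.0, -1.0)
    return np.maximum(0.0, r * (alpha_L - alpha_H) + zeta).sum()

def total_loss(Y_hat, y, alpha_L, alpha_H, beta=1.0, zeta=0.5):
    # L = L_c + beta * L_a: classification loss plus weighted attention constraint.
    return cross_entropy(Y_hat, y) + beta * attention_margin_loss(alpha_L, alpha_H, y, zeta)
```

With `r_i` defined this way, the hinge term is zero for an anomalous node once its high-frequency attention exceeds its low-frequency attention by at least the margin `zeta` (and symmetrically for normal nodes), which matches the intuition that anomalies carry more high-frequency signal.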