AddGraph: Anomaly Detection in Dynamic Graph Using Attention-based Temporal GCN
Authors: Li Zheng, Zhenpeng Li, Jian Li, Zhao Li, Jun Gao
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on real-world datasets, and illustrate that AddGraph can outperform the state-of-the-art competitors in anomaly detection significantly. |
| Researcher Affiliation | Collaboration | Li Zheng (1,2), Zhenpeng Li (3), Jian Li (3), Zhao Li (3) and Jun Gao (1,2). (1) The Key Laboratory of High Confidence Software Technologies, Ministry of Education, China; (2) School of EECS, Peking University, China; (3) Alibaba Group, China. {greezheng, gaojun}@pku.edu.cn, {zhen.lzp, zeshan.lj, lizhao.lz}@alibaba-inc.com |
| Pseudocode | Yes | Algorithm 1: AddGraph algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate our framework on two datasets and the details of these two datasets are shown in Table 2. UCI Message is a directed network containing messages among an online community at University of California, Irvine. Digg is a response network of Digg, a social news site. We need to manually build the required datasets because the ground-truth for the test phase is difficult to obtain [Akoglu et al., 2015], and we follow the approach used in [Yu et al., 2018] to inject anomalous edges into two datasets. |
| Dataset Splits | No | We divide the dataset into two parts, the first 50% as the training data and the latter 50% as the test data. No explicit mention of a separate validation split is provided. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow versions or other libraries). |
| Experiment Setup | Yes | The number of GCN layers is 2. The weight decay λ for regularization is 5e-7. The learning rate lr is 0.002. The dropout rate is 0.2. For the UCI Message dataset, the hidden-state dimension is 500, the margin γ is set to 0.5, and the parameters β and µ are set to 1.0 and 0.3, respectively. For the Digg dataset, the hidden-state dimension is 200, the margin γ is set to 0.7, and the parameters β and µ are set to 3.0 and 0.5, respectively. In the training phase, we use snapshots of training data to build an initial model. In the test phase, we maintain the model incrementally as each snapshot at timestamp t arrives. (A second reported configuration gives: the number of GCN layers is 3; the learning rate lr is 0.001; a hidden-state dimension of 100 for UCI Message and 50 for Digg; and a margin γ of 0.6.) |
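The dataset preparation described above (a chronological 50/50 train/test split, plus injected anomalous edges because ground truth is hard to obtain) can be sketched as follows. This is a simplified illustration, not the exact injection procedure of Yu et al. (2018); the function names, the edge-tuple layout `(src, dst, t)`, and the 5% injection rate are assumptions for the sketch.

```python
import random

def chronological_split(edges, train_frac=0.5):
    """Sort edges by timestamp and split: first half train, second half test."""
    edges = sorted(edges, key=lambda e: e[2])  # each edge is (src, dst, t)
    cut = int(len(edges) * train_frac)
    return edges[:cut], edges[cut:]

def inject_anomalies(edges, num_nodes, pct=0.05, seed=0):
    """Inject random non-existent edges as labeled anomalies.

    A simplification of the injection scheme of Yu et al. (2018):
    sample node pairs uniformly, keep only pairs that never appear
    in the real edge stream, and reuse observed timestamps.
    """
    rng = random.Random(seed)
    existing = {(s, d) for s, d, _ in edges}
    injected = []
    while len(injected) < int(len(edges) * pct):
        s, d = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if s != d and (s, d) not in existing:
            t = rng.choice(edges)[2]  # place the fake edge at a real timestamp
            injected.append((s, d, t))
            existing.add((s, d))
    return injected
```

In practice the injected edges would be mixed into the test-phase snapshots and labeled anomalous, while all training edges are treated as normal.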
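The margin γ reported in the setup enters a pairwise ranking objective: observed (normal) edges should score lower on the anomaly function f than sampled negative edges, by at least γ. A minimal sketch of such a margin loss, using the γ = 0.5 value reported for UCI Message (the function name and plain-list interface are assumptions; the paper's full objective also includes the β, µ, and weight-decay λ terms omitted here):

```python
def margin_loss(normal_scores, negative_scores, gamma=0.5):
    """Pairwise margin loss over matched (normal, negative) edge pairs.

    Each term is max(0, gamma + f(normal) - f(negative)): the loss is
    zero once a negative edge out-scores its paired normal edge by at
    least gamma, so training pushes anomaly scores of real edges down
    and of non-existent edges up.
    """
    return sum(max(0.0, gamma + f_n - f_neg)
               for f_n, f_neg in zip(normal_scores, negative_scores))
```

For example, a well-separated pair such as scores 0.1 (normal) vs 0.9 (negative) contributes nothing at γ = 0.5, while a poorly separated pair still incurs a positive penalty.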