Inductive Anomaly Detection on Attributed Networks
Authors: Kaize Ding, Jundong Li, Nitin Agarwal, Huan Liu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various attributed networks demonstrate the efficacy of the proposed approach. |
| Researcher Affiliation | Academia | Kaize Ding (1), Jundong Li (2,3), Nitin Agarwal (4), Huan Liu (1). (1) Computer Science and Engineering, Arizona State University, USA; (2) Electrical and Computer Engineering, University of Virginia, USA; (3) Computer Science & School of Data Science, University of Virginia, USA; (4) Information Science, University of Arkansas at Little Rock, USA. kaize.ding@asu.edu, jundong@virginia.edu, nxagarwal@ualr.edu, huan.liu@asu.edu |
| Pseudocode | Yes | Algorithm 1: The training process of AEGIS |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | In the experiments, we employ three public real-world attributed network datasets (BlogCatalog [Wang et al., 2010], Flickr [Tang and Liu, 2009], and ACM [Tang et al., 2008]) for performance comparison. |
| Dataset Splits | Yes | We sample another 40% of the data to construct the newly observed attributed (sub)network G for testing, and the remaining 10% of the data is for validation purposes. |
| Hardware Specification | No | The paper does not specify any hardware details such as CPU/GPU models, memory, or cloud computing instances used for experiments. |
| Software Dependencies | No | The paper mentions software components like ELU, ReLU, and Adam optimizer but does not provide specific version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | We set the learning rate of the reconstruction loss to 0.005. The number of training epochs of GDN-AE is 200, while that of Ano-GAN is 50. We set the parameter K to 3 (BlogCatalog), 2 (Flickr), and 3 (ACM). In addition, the number of samples P is set to 0.05n for each dataset. |
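
The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch. This is a minimal illustration, not the authors' code: the function and key names (`aegis_config`, `lr_recon`, `epochs_gdn_ae`, etc.) and the node count used in the example are hypothetical; only the numeric values come from the paper.

```python
# Hypothetical configuration sketch assembled from the hyperparameters the
# paper reports; all identifiers are assumptions, not from the authors' code.

def aegis_config(dataset, n_nodes):
    """Return the reported AEGIS training settings for a given dataset."""
    k_per_dataset = {"BlogCatalog": 3, "Flickr": 2, "ACM": 3}  # parameter K
    return {
        "lr_recon": 0.005,        # learning rate of the reconstruction loss
        "epochs_gdn_ae": 200,     # training epochs for GDN-AE
        "epochs_ano_gan": 50,     # training epochs for Ano-GAN
        "K": k_per_dataset[dataset],
        "num_samples_P": int(0.05 * n_nodes),  # P = 0.05n
    }

# Illustrative node count; the paper's dataset sizes are not restated here.
cfg = aegis_config("Flickr", n_nodes=1000)
print(cfg["K"], cfg["num_samples_P"])  # 2 50
```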