Radar: Residual Analysis for Anomaly Detection in Attributed Networks

Authors: Jundong Li, Harsh Dani, Xia Hu, Huan Liu

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on real datasets show the effectiveness and generality of the proposed framework.
Researcher Affiliation | Academia | Computer Science and Engineering, Arizona State University, USA; Department of Computer Science and Engineering, Texas A&M University, USA
Pseudocode | Yes | Algorithm 1: Anomaly detection in attributed networks via residual analysis (Radar)
Open Source Code | No | The paper does not provide any concrete access information (a link or an explicit statement) for the source code of the proposed Radar framework.
Open Datasets | Yes | We use three real-world attributed network datasets to evaluate the proposed anomaly detection method. Among them, the Disney and Books datasets come from Amazon co-purchase networks. Disney is a co-purchase network of movies whose attributes include prices, ratings, number of reviews, etc.; its ground-truth anomalies were manually labeled by high school students. Books is a co-purchase network of books with attributes similar to Disney's; its ground-truth anomalies were obtained from amazonfail tag information. Enron is an email network dataset in which spam messages are taken as ground truth.
Dataset Splits | No | The paper describes a grid-search strategy for parameter tuning but does not explicitly provide train/validation/test dataset splits, sample counts, or a cross-validation methodology.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory specifications, used for running its experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | The proposed Radar framework has three regularization parameters; for a fair comparison, we tune them with a grid-search strategy over {10^-3, 10^-2, ..., 10^2, 10^3}. The effects of these parameters are investigated later. The parameter settings of the baseline methods follow [Sánchez et al., 2013].
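The grid search described in the experiment setup can be sketched as follows. This is a minimal illustration, not the authors' code: `evaluate` is a hypothetical caller-supplied function standing in for one full Radar run (train with the given parameters, then score detection quality, e.g. AUC, against the ground-truth anomalies).

```python
import itertools

# Candidate values for the three regularization parameters,
# matching the paper's grid {10^-3, 10^-2, ..., 10^2, 10^3}.
GRID = [10.0 ** k for k in range(-3, 4)]

def grid_search(evaluate):
    """Exhaustively try every (alpha, beta, gamma) triple from GRID.

    `evaluate(alpha, beta, gamma)` is assumed to return a scalar score
    where higher is better; the best triple and its score are returned.
    """
    best_params, best_score = None, float("-inf")
    for alpha, beta, gamma in itertools.product(GRID, repeat=3):
        score = evaluate(alpha, beta, gamma)
        if score > best_score:
            best_params, best_score = (alpha, beta, gamma), score
    return best_params, best_score
```

With a 7-value grid per parameter this is 7^3 = 343 full training runs, which is why such tuning is usually reported separately from the main timing results.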