LG-FGAD: An Effective Federated Graph Anomaly Detection Framework

Authors: Jinyu Cai, Yunhe Zhang, Jicong Fan, See-Kiong Ng

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on various types of real-world datasets prove the superiority of our method.
Researcher Affiliation | Academia | 1 Institute of Data Science, National University of Singapore, Singapore; 2 Shenzhen Research Institute of Big Data, Shenzhen, China; 3 School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China. Emails: {jinyucai, seekiong}@nus.edu.sg, zhangyhannie@gmail.com, fanjicong@cuhk.edu.cn
Pseudocode | Yes | The total training process of LG-FGAD is included in Appendix A.
Open Source Code | Yes | Code is available at https://github.com/wownice333/LG-FGAD
Open Datasets | Yes | The datasets used in the experiment are from the publicly available real-world graph benchmark collection TUDataset (https://chrsmrrs.github.io/datasets/), and their construction details are illustrated in Appendix B. (A minimal loading sketch appears below the table.)
Dataset Splits | No | The paper states that the datasets come from the publicly available graph benchmark collection TUDataset, but it does not specify the train/validation/test split percentages or sample counts needed for reproduction, nor does it cite predefined splits.
Hardware Specification | Yes | All experiments in this paper are run on the platform with NVIDIA Tesla A100 GPU and AMD EPYC 7532 CPU.
Software Dependencies | No | We implement LG-FGAD based on the PyTorch Geometric [Fey and Lenssen, 2019] library in practice. (The library is named, but no version numbers are given for it or for other dependencies.)
Experiment Setup | Yes | For the proposed LG-FGAD, we use RMSprop as the optimizer with a fixed learning rate α = 0.001. Besides, we adopt a grid search strategy with all parameters varying in the range [1e-3, 1e2]. Regarding some fixed hyperparameters embedded in the architecture, the coefficient of the KL-divergence regularization term is fixed as 1e-4, and the temperature of knowledge distillation is set to 1.0. The range of gradient clipping is limited to [-0.01, 0.01]. For baseline methods, we use Adam as the optimizer, and the learning rate is set to α = 0.001. The percentile to draw the final decision boundary in Deep SVDD for AIDS is fixed at 0.3, and others are fixed at 0.001. Note that the batch size and the training epochs are set to 128 and 200, respectively, for our method and other baselines.
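
To make the Open Datasets and Software Dependencies rows concrete, below is a minimal sketch (not the authors' code) of loading one of the TUDataset benchmarks with PyTorch Geometric. The dataset name AIDS and the batch size of 128 are taken from the rows above; the root path and everything else are illustrative assumptions.

    # Minimal sketch: load a TUDataset benchmark with PyTorch Geometric.
    from torch_geometric.datasets import TUDataset
    from torch_geometric.loader import DataLoader

    # "AIDS" is one of the benchmarks named in the Experiment Setup row; the root path is an assumption.
    dataset = TUDataset(root="data/TUDataset", name="AIDS")

    # Batch size 128 follows the Experiment Setup row.
    loader = DataLoader(dataset, batch_size=128, shuffle=True)

    for batch in loader:
        print(batch.num_graphs)  # each batch holds up to 128 graphs
        break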
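
The Experiment Setup row also pins down the optimizer, learning rate, gradient clipping range, and epoch count. The following is a hedged sketch of how those settings could be wired up in PyTorch; the model and loss are placeholders, not the LG-FGAD architecture, and the training loop is only meant to show where each reported hyperparameter fits.

    import torch

    # Placeholder model and data; the real architecture and objective are in the paper and repository.
    model = torch.nn.Linear(16, 2)
    dummy_batch = torch.randn(128, 16)  # batch size 128 per the setup row

    # RMSprop with a fixed learning rate of 0.001, as reported.
    optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

    for epoch in range(200):  # 200 training epochs, as reported
        optimizer.zero_grad()
        loss = model(dummy_batch).pow(2).mean()  # placeholder loss
        loss.backward()
        # Clip each gradient value into [-0.01, 0.01], matching the reported clipping range.
        torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.01)
        optimizer.step()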