Don't Ignore Alienation and Marginalization: Correlating Fraud Detection

Authors: Yilong Zang, Ruimin Hu, Zheng Wang, Danni Xu, Jia Wu, Dengshi Li, Junhang Wu, Lingfei Ren

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on two public datasets demonstrate that COFRAUD achieves significant improvements over state-of-the-art methods."
Researcher Affiliation | Academia | Yilong Zang¹, Ruimin Hu¹, Zheng Wang¹, Danni Xu², Jia Wu³, Dengshi Li⁴, Junhang Wu¹, Lingfei Ren¹. ¹National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University; ²School of Computing, National University of Singapore; ³School of Computing, Macquarie University; ⁴School of Artificial Intelligence, Jianghan University
Pseudocode | No | The paper describes the model architecture and modules in text and diagrams but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statement that the authors are releasing code for the described method, and no link to a code repository.
Open Datasets | Yes | "Our experiments are conducted on two real-world datasets, Amazon [McAuley and Leskovec, 2013] and Yelp [Rayana and Akoglu, 2015]." (A loading sketch follows the table.)
Dataset Splits | Yes | "We set the training ratio as 40% and 10% to compare the performance of different methods. For the remaining part of the samples, we divide the validation set and test set in the ratio of 1:2." (A split sketch follows the table.)
Hardware Specification | No | The paper states: "And we conduct all the experiments on the GPU resource of Google Colaboratory [Carneiro et al., 2018]," but does not give specific hardware details such as exact GPU/CPU models or memory amounts.
Software Dependencies | No | The paper states: "We implement our method by Pytorch and DGL [Wang, 2019]," but does not give version numbers for these dependencies. (An environment-logging sketch follows the table.)
Experiment Setup | Yes | "We set the training ratio as 40% and 10% to compare the performance of different methods. For the remaining part of the samples, we divide the validation set and test set in the ratio of 1:2... We set the dimensions of all hidden units to 16, 32, 64, and 128, and module layers to 1 and 2... We set the hidden units to 32 and the batch size to 1024 for all algorithms on the two datasets." (The grid sketch follows the table.)
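
The Amazon and Yelp benchmarks cited in the Open Datasets row are packaged in DGL, which the paper already uses. A minimal loading sketch, assuming DGL's built-in FraudAmazonDataset and FraudYelpDataset match the copies the authors used (the paper does not say how the data was obtained):

```python
# Sketch only: loads the two public fraud graphs via DGL's packaged datasets.
from dgl.data import FraudAmazonDataset, FraudYelpDataset

for dataset in (FraudAmazonDataset(), FraudYelpDataset()):
    graph = dataset[0]                 # one multi-relation graph per dataset
    labels = graph.ndata['label']      # 1 = fraud, 0 = benign
    features = graph.ndata['feature']  # node feature matrix
    print(dataset.name, graph.num_nodes(), tuple(features.shape),
          float(labels.float().mean()))  # fraction of fraud nodes
```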
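
The protocol in the Dataset Splits row can be written out directly: 40% (or 10%) of nodes for training, with the remainder divided into validation and test at 1:2. A sketch assuming stratified sampling (the paper specifies neither stratification nor seeds); `split_indices` is an illustrative helper, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_indices(labels, train_ratio=0.4, seed=0):
    """labels: 1-D array of 0/1 node labels."""
    labels = np.asarray(labels)
    idx = np.arange(len(labels))
    # train_ratio of all samples for training (0.4 or 0.1 in the paper)
    train_idx, rest_idx = train_test_split(
        idx, train_size=train_ratio, stratify=labels, random_state=seed)
    # remaining samples -> validation : test = 1 : 2
    val_idx, test_idx = train_test_split(
        rest_idx, train_size=1 / 3, stratify=labels[rest_idx], random_state=seed)
    return train_idx, val_idx, test_idx
```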
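
The Hardware Specification and Software Dependencies rows both flag missing environment details. A few lines like the following, run once in the training environment, would capture them; only standard torch/dgl introspection calls are used:

```python
# Sketch: record the library versions and GPU details the paper omits.
import torch
import dgl

print("torch :", torch.__version__, "| cuda build:", torch.version.cuda)
print("dgl   :", dgl.__version__)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("gpu   :", props.name, f"({props.total_memory / 2**30:.1f} GiB)")
```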
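
The search described in the Experiment Setup row amounts to a small grid over hidden dimensions and module layers, with batch size fixed. A sketch of that grid; `train_and_eval` is a hypothetical placeholder, since COFRAUD's implementation is not public:

```python
from itertools import product

hidden_dims = [16, 32, 64, 128]   # "dimensions of all hidden units"
module_layers = [1, 2]            # "module layers"
BATCH_SIZE = 1024                 # fixed for all algorithms on both datasets

for dim, layers in product(hidden_dims, module_layers):
    config = {"hidden_units": dim, "layers": layers, "batch_size": BATCH_SIZE}
    # train_and_eval(config)  # hypothetical stand-in for a model/trainer
    print(config)
```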