Online Reputation Fraud Campaign Detection in User Ratings

Authors: Chang Xu, Jie Zhang, Zhu Sun

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical analysis on two real-world datasets validates the effectiveness and efficiency of the proposed framework.
Researcher Affiliation | Academia | Chang Xu, Jie Zhang, Zhu Sun — School of Computer Science and Engineering, Nanyang Technological University, Singapore (xuch0007@e.ntu.edu.sg, zhangj@ntu.edu.sg, sunzhu@ntu.edu.sg)
Pseudocode | Yes | Algorithm 1: Incremental RFC Detection Flow
Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Our experiments are conducted on two real-world datasets. Restaurant Reviews on Yelp (Yelp Zip): This dataset was used in [Rayana and Akoglu, 2015]... Product Reviews on Amazon (Amazon Cn): This dataset was created by [Xu et al., 2013].
Dataset Splits | No | The paper mentions 'training data' but does not provide specific percentages, sample counts, or a detailed methodology for dataset splits (e.g., for train, validation, or test sets).
Hardware Specification | Yes | The experiments are conducted on a machine with a single-CPU, 3.20Ghz and 16G memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names, frameworks, or solvers with their versions) used to replicate the experiment.
Experiment Setup | Yes | For FRAUDSCAN and its variants, four parameters α1, α2, α3, and k need to be tuned... Here, the optimal settings (α1, α2, α3, k) are used, i.e., (0.1, 0.1, 0.01, 20) for Amazon Cn and (0.01, 0.1, 0.01, 30) for Yelp Zip.
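For concreteness, the reported optimal hyperparameter settings can be captured in a small configuration sketch. This is a minimal illustration only: since no code is released, the FraudScanConfig container and the dataset keys are hypothetical stand-ins; only the (α1, α2, α3, k) values are taken from the paper's stated setup.

```python
# Hypothetical sketch of the reported FRAUDSCAN hyperparameter settings.
# Only the (alpha1, alpha2, alpha3, k) values come from the paper; the
# FraudScanConfig class is an assumed container, not the authors' code.
from dataclasses import dataclass


@dataclass(frozen=True)
class FraudScanConfig:
    alpha1: float  # tuned per dataset, per the paper's experiment setup
    alpha2: float
    alpha3: float
    k: int         # tuned alongside the three alpha parameters


# Optimal settings (alpha1, alpha2, alpha3, k) as reported in the paper.
OPTIMAL_SETTINGS = {
    "Amazon Cn": FraudScanConfig(alpha1=0.1, alpha2=0.1, alpha3=0.01, k=20),
    "Yelp Zip": FraudScanConfig(alpha1=0.01, alpha2=0.1, alpha3=0.01, k=30),
}

if __name__ == "__main__":
    for dataset, cfg in OPTIMAL_SETTINGS.items():
        print(f"{dataset}: {cfg}")
```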