Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

SAFE: A Neural Survival Analysis Model for Fraud Early Detection

Authors: Panpan Zheng, Shuhan Yuan, Xintao Wu (pp. 1278-1285)

AAAI 2019 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on two real-world datasets demonstrate that SAFE outperforms both the survival analysis model and recurrent neural network model alone as well as state-of-the-art fraud early detection approaches.
Researcher Affiliation Academia Panpan Zheng, Shuhan Yuan, Xintao Wu; University of Arkansas, Fayetteville, AR, USA
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes Repeatability. Our software together with the datasets are available at https://github.com/PanpanZheng/SAFE.
Open Datasets Yes We conduct our experiments on two real-world datasets: Twitter... We adopt the UMDWikipedia dataset (Kumar, Spezzano, and Subrahmanian 2015)... Our software together with the datasets are available at https://github.com/PanpanZheng/SAFE.
Dataset Splits Yes We randomly divide the dataset into a training set, a validation set, and a testing set with the ratio (7:1:2).
Hardware Specification No The paper does not provide specific details about the hardware used for experiments.
Software Dependencies No The paper mentions using Adam for optimization and Lifelines for CPH implementation, but no specific version numbers for any software dependencies are provided.
Experiment Setup Yes SAFE is trained by back-propagation via Adam (Kingma and Ba 2015) with a batch size of 16 and a learning rate of 10^-3. The dimension of the GRU hidden unit is 32.
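Taken together, the Dataset Splits and Experiment Setup rows describe a small, reproducible training recipe. The sketch below collects the reported hyperparameters and shows the 7:1:2 random split; the function name, config keys, and random seed are illustrative assumptions, not the authors' code (their implementation is in the linked repository).

```python
import random

# Hyperparameters as reported in the paper's experiment setup:
# Adam optimizer, batch size 16, learning rate 10^-3, GRU hidden dim 32.
CONFIG = {
    "optimizer": "Adam",
    "batch_size": 16,
    "learning_rate": 1e-3,
    "gru_hidden_dim": 32,
}

def split_dataset(samples, ratios=(0.7, 0.1, 0.2), seed=0):
    """Randomly partition samples into train/validation/test sets with the
    7:1:2 ratio described in the paper. The seed is an assumption made here
    for determinism; the paper does not report one."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 10 20
```

Any partitions produced this way are disjoint and jointly cover the dataset, which is what the 7:1:2 ratio in the Dataset Splits row implies.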