Spatio-Temporal Attention-Based Neural Network for Credit Card Fraud Detection
Authors: Dawei Cheng, Sheng Xiang, Chencheng Shang, Yiyi Zhang, Fangzhou Yang, Liqing Zhang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Afterward, we conduct extensive experiments on a real-world fraud transaction dataset; the results show that STAN performs better than other state-of-the-art baselines in both AUC and precision-recall curves. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China {dawei.cheng, yi95yi, laketiticaca}@sjtu.edu.cn, zhang-lq@cs.sjtu.edu.cn |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | No | We collected fraud transactions from a major commercial bank, which comprises real-world credit card transaction records spanning twelve months, from Jan 1 to Dec 31, 2016. |
| Dataset Splits | Yes | Records of the first nine months were used as training data and then we predicted the fraud transactions in the following three months (Oct, Nov and Dec). We set the temporal and spatial parameters (λ1 and λ2) by cross validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions 'Adam Optimizer' but does not provide specific version numbers for software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | We set the initial learning rate to 0.001, and the batch size to 256 by default. For STAN, we employ 2 convolution layers, each with a 4x4x4 convolution kernel, followed by a max-pooling layer. Two fully connected layers are added on top of the 3D Conv Net, each consisting of 32 neurons. |
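The Dataset Splits row quotes a purely temporal split: the first nine months of 2016 for training and the last three (Oct-Dec) for testing. A minimal sketch of such a split is below; the record fields (`date`, `amount`) and list-of-dicts layout are assumptions, not the paper's actual data format.

```python
from datetime import date

def temporal_split(transactions, cutoff=date(2016, 10, 1)):
    """Split transaction records (dicts with a 'date' key) at a cutoff date.

    Records strictly before the cutoff go to training; the rest to test.
    The Oct 1, 2016 cutoff matches the paper's 9-month/3-month split.
    """
    train = [t for t in transactions if t["date"] < cutoff]
    test = [t for t in transactions if t["date"] >= cutoff]
    return train, test

# Hypothetical records for illustration only.
records = [
    {"date": date(2016, 3, 15), "amount": 120.0},
    {"date": date(2016, 11, 2), "amount": 89.5},
]
train, test = temporal_split(records)
# The March record lands in train, the November record in test.
```

Splitting by time rather than at random avoids leaking future transactions into training, which matters for a fraud-prediction evaluation of this kind.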
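The Experiment Setup row pins down the reported architecture: two 4x4x4 convolution layers, each followed by max-pooling, feeding two 32-neuron fully connected layers. The sketch below traces the feature-cube side length through that stack; the input cube size (32), stride 1, valid padding, and a 2x2x2 pooling window are all assumptions the excerpt does not state.

```python
def conv3d_out(size, kernel=4, stride=1):
    """Output side length of a valid (unpadded) 3D convolution."""
    return (size - kernel) // stride + 1

def pool3d_out(size, window=2):
    """Output side length of non-overlapping max-pooling."""
    return size // window

def stan_feature_cube(side=32):
    """Trace one spatial side through conv -> pool -> conv -> pool."""
    for _ in range(2):           # two conv + max-pool stages, per the paper
        side = conv3d_out(side)  # 4x4x4 kernel, stride 1 (assumed)
        side = pool3d_out(side)  # 2x2x2 max-pooling window (assumed)
    return side

# 32 -> 29 -> 14 -> 11 -> 5; flattened volume fed to the 32-unit FC layers.
flat = stan_feature_cube(32) ** 3  # 125 features under these assumptions
```

A trace like this is a quick sanity check when re-implementing the model: if the assumed input size or strides are wrong, the flattened dimension entering the fully connected layers will not match.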