Federated Meta-Learning for Fraudulent Credit Card Detection

Authors: Wenbo Zheng, Lan Yan, Chao Gou, Fei-Yue Wang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that the proposed approach achieves significantly higher performance compared with the other state-of-the-art approaches.
Researcher Affiliation | Academia | Wenbo Zheng (1,2), Lan Yan (2,4), Chao Gou (3) and Fei-Yue Wang (2,4); (1) School of Software Engineering, Xi'an Jiaotong University; (2) The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences; (3) School of Intelligent Systems Engineering, Sun Yat-sen University; (4) School of Artificial Intelligence, University of Chinese Academy of Sciences. Emails: zwb2017@stu.xjtu.edu.cn, yanlan2017@ia.ac.cn, gouchao@mail.sysu.edu.cn, feiyue.wang@ia.ac.cn
Pseudocode | Yes | Algorithm 1: Federated Meta-Learning Approach (an illustrative sketch of such a loop is given after this table).
Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described methodology.
Open Datasets | Yes | ECC: We sourced the first dataset from the European Credit Card (ECC) transactions provided by the ULB ML Group [Dal Pozzolo, 2015]. RA: We sourced the second dataset from Revolution Analytics (RA) [Mohammed et al., 2018]. SD and Vesta: We sourced the third and fourth datasets from Kaggle; the third is a synthetic dataset (SD) [1] used to evaluate the performance of fraud detection methods; the fourth [2] is a challenging large-scale dataset that comes from Vesta's real-world e-commerce transactions and contains a wide range of features, from device type to product features. [1] https://www.kaggle.com/ntnu-testimon/paysim1 [2] https://www.kaggle.com/c/ieee-fraud-detection/overview
Dataset Splits | No | The paper describes the meta-learning-specific 'support set' and 'training set' used in its meta-training/meta-testing phases, but it does not provide explicit train/validation/test splits (e.g., percentages or counts) for the overall datasets used in the experiments (ECC, RA, SD, Vesta).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and the 'ResNet-34 architecture' but does not specify any software names with version numbers for libraries, frameworks, or programming languages.
Experiment Setup | Yes | We employ the ResNet-34 architecture [He et al., 2016] for learning the feature extraction model. When we meta-learn the transferable feature extraction, we use the Adam optimizer [Kingma and Ba, 2014] with a learning rate of 0.001 and a decay every 40 epochs. We train for 1000 epochs in total and adopt the semi-hard mining strategy [Harwood et al., 2017] when the loss starts to converge.
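The Pseudocode row records that the paper includes Algorithm 1 without reproducing it here. For orientation only, below is a minimal, generic sketch of how a federated meta-learning round can be organized: FedAvg-style server aggregation combined with a Reptile-style client adaptation on each client's local support set. The linear model, least-squares loss, client data, and all hyperparameters are illustrative assumptions; this is not the paper's Algorithm 1.

```python
# Generic sketch: one federated meta-learning round (FedAvg-style aggregation
# with Reptile-style local adaptation). NOT the paper's Algorithm 1; the model,
# loss, client data, and hyperparameters below are illustrative assumptions.
import numpy as np

def client_adapt(global_w, support_x, support_y, inner_lr=0.01, inner_steps=5):
    """Adapt the global weights on one client's local support set
    (plain gradient steps on a linear least-squares loss)."""
    w = global_w.copy()
    for _ in range(inner_steps):
        grad = 2 * support_x.T @ (support_x @ w - support_y) / len(support_y)
        w -= inner_lr * grad
    return w

def federated_meta_round(global_w, clients, meta_lr=0.1):
    """One communication round: every client adapts locally, then the server
    moves the global weights toward the average of the adapted weights."""
    adapted = [client_adapt(global_w, x, y) for x, y in clients]
    mean_adapted = np.mean(adapted, axis=0)
    return global_w + meta_lr * (mean_adapted - global_w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    # Each "client" holds a small private support set (e.g., one bank's transactions).
    clients = []
    for _ in range(4):
        x = rng.normal(size=(32, 2))
        y = x @ true_w + 0.1 * rng.normal(size=32)
        clients.append((x, y))
    w = np.zeros(2)
    for _ in range(50):
        w = federated_meta_round(w, clients)
    print("learned weights:", w)
```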
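The Experiment Setup row quotes the paper's training configuration. A minimal sketch of that configuration, assuming PyTorch, is shown below. The ResNet-34 backbone, Adam optimizer, learning rate of 0.001, decay every 40 epochs, and 1000 training epochs come from the quoted text; the decay factor, embedding size, triplet margin, stand-in input data, and the absence of semi-hard mining (the paper follows Harwood et al., 2017 for that) are assumptions.

```python
# Sketch of the quoted setup, assuming PyTorch. From the quoted text:
# ResNet-34, Adam, lr = 0.001, a decay every 40 epochs, 1000 epochs.
# Assumptions: decay factor 0.1, 128-d embedding, triplet margin 0.2,
# random stand-in inputs, no semi-hard mining.
import torch
import torch.nn as nn
from torchvision.models import resnet34

device = "cuda" if torch.cuda.is_available() else "cpu"

model = resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, 128)  # embedding head (assumed size)
model = model.to(device)

criterion = nn.TripletMarginLoss(margin=0.2)     # margin is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

for epoch in range(1000):
    # Stand-in anchor/positive/negative batches; real training would draw
    # triplets from transaction features rendered as model inputs.
    anchor = torch.randn(8, 3, 224, 224, device=device)
    positive = torch.randn(8, 3, 224, 224, device=device)
    negative = torch.randn(8, 3, 224, 224, device=device)

    optimizer.zero_grad()
    loss = criterion(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    scheduler.step()  # steps the learning-rate decay; it drops every 40 epochs
```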