Interpretable Multimodal Learning for Intelligent Regulation in Online Payment Systems
Authors: Shuoyao Wang, Diwei Zhu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With the real datasets from the largest online payment system, WeChat Pay of Tencent, we conduct experiments to validate the practical application value of CIAN, where our method outperforms the state-of-the-art methods. |
| Researcher Affiliation | Collaboration | 1) College of Electronic and Information Engineering, Shenzhen University, China; 2) Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong. w.shuoy@gmail.com, zd115@ie.cuhk.edu.hk. Work done while Shuoyao was a senior researcher with the Financial Technology Group, Tencent, China. |
| Pseudocode | No | The paper describes the system architecture and components but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide open-source code for the CIAN methodology described. It only references a link for the pre-trained RoBERTa model used: 'https://github.com/brightmart/roberta_zh'. |
| Open Datasets | No | For a merchant, its transaction flow records make up a basic data identification, and the remarks and comments are regarded as the description of this transaction flow. From the real-world datasets in WeChat Pay, we extract two sub-datasets for the evaluation. |
| Dataset Splits | Yes | In the first sub-dataset, namely the 31-D dataset, we collect 50,000 merchants from 2019/07/01 to 2019/07/31 for training, 1,000 merchants from 2019/08/01 to 2019/08/31 for validating, and 1,000 merchants from 2019/07/01 to 2019/07/31 for testing, from the WeChat Pay system. In the second dataset, namely the 7-D dataset, we collect 50,000 merchants from 2019/07/01 to 2019/07/07 for training, 1,000 merchants from 2019/08/01 to 2019/08/07 for validating, and 1,000 merchants from 2019/07/01 to 2019/07/07 for testing. Through the hardest negative sampler [Hermans et al., 2017], there are in total 500,000 training pairs, 10,000 pairs for validating, and 10,000 for testing. (A minimal sketch of this batch-hard pair mining appears after the table.) |
| Hardware Specification | Yes | All of our experiments are conducted on a machine with an Intel Xeon E5-2630 CPU, two NVIDIA GTX 1080 Ti GPUs, and 64GB RAM. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number. It also mentions 'RoBERTa', but this is a model, not a software dependency with a version. |
| Experiment Setup | Yes | The model parameters are initialized with the Kaiming normal initializer, and the Adam optimization algorithm is used to train the overall network. Moreover, we set the batch size to 256, the initial learning rate to 0.01, and the regularizer parameter to 0.01 to prevent over-fitting. (A minimal PyTorch sketch of this setup appears after the table.) |
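
The hardest negative sampler cited in the Dataset Splits row follows the batch-hard strategy of Hermans et al. (2017). Below is a minimal PyTorch sketch of that idea as it might be applied here; the function name, tensor shapes, and the assumption that merchant embeddings and category labels are available per mini-batch are illustrative, not taken from the paper.

```python
import torch


def batch_hard_pairs(embeddings: torch.Tensor, labels: torch.Tensor):
    """Batch-hard mining in the spirit of Hermans et al. (2017).

    For each anchor in the mini-batch, pick the hardest (farthest) positive
    and the hardest (closest) negative. Assumes every anchor has at least
    one positive and one negative in the batch.
    embeddings: (B, D) merchant representations; labels: (B,) category ids.
    """
    dist = torch.cdist(embeddings, embeddings)            # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (B, B) same-category mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    pos_mask = same & ~eye                                # positives, excluding self
    neg_mask = ~same                                      # negatives

    # Hardest positive: largest distance among same-category pairs.
    hardest_pos = dist.masked_fill(~pos_mask, float('-inf')).argmax(dim=1)
    # Hardest negative: smallest distance among different-category pairs.
    hardest_neg = dist.masked_fill(~neg_mask, float('inf')).argmin(dim=1)
    return hardest_pos, hardest_neg
```

In a training loop, the returned indices would feed a pairwise or triplet-style loss over the mined pairs described in the Dataset Splits row.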
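
The training configuration reported in the Experiment Setup row (Kaiming normal initialization, Adam, batch size 256, learning rate 0.01, regularizer 0.01) can be expressed with standard PyTorch calls. The sketch below uses a placeholder network, since CIAN itself is not released, and reading the "regularizer parameter" as Adam's weight decay is an assumption.

```python
import torch.nn as nn
import torch.optim as optim


def init_weights(module: nn.Module):
    # Kaiming normal initialization, as named in the paper's setup.
    if isinstance(module, (nn.Linear, nn.Conv1d, nn.Conv2d)):
        nn.init.kaiming_normal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)


# Placeholder network standing in for CIAN (the actual model is not released).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
model.apply(init_weights)

# Adam with lr=0.01; the 0.01 "regularizer parameter" is interpreted here as weight decay.
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.01)

# Batch size 256 as reported, e.g.:
# loader = torch.utils.data.DataLoader(pair_dataset, batch_size=256, shuffle=True)
```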