Risk Guarantee Prediction in Networked-Loans

Authors: Dawei Cheng, Xiaoyang Wang, Ying Zhang, Liqing Zhang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental result shows that DGANN could significantly improve the risk prediction accuracy in both the precision and recall compared with state-of-the-art baselines. We also conduct empirical studies to uncover the risk guarantee patterns from the learned attentional network features.
Researcher Affiliation | Academia | Dawei Cheng¹, Xiaoyang Wang², Ying Zhang³ and Liqing Zhang¹. Affiliations: ¹MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; ²Zhejiang Gongshang University, China; ³University of Technology Sydney, Australia. Emails: dawei.cheng@sjtu.edu.cn, xiaoyangw@zjgsu.edu.cn, ying.zhang@uts.edu.au, zhang-lq@cs.sjtu.edu.cn
Pseudocode | No | The paper describes the model architecture and algorithms in text and mathematical formulas but does not provide a dedicated pseudocode block or algorithm figure.
Open Source Code | No | The paper does not provide any specific links or statements about the availability of its source code.
Open Datasets | No | The paper states it uses a "real-world dataset from a major financial institution in East Asia" but does not provide access information (link, DOI, formal citation) or state that it is publicly available.
Dataset Splits | Yes | In this section, we report out the results of risk guarantee prediction, in which the records of the year 2013 are employed as the training data. We then predict the risk guarantees in a recurrent manner for the next three years.
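The split the paper describes, training on 2013 records and then predicting the following three years "in a recurrent manner", can be sketched as below. This is a hypothetical reading of that protocol (each evaluated year is folded into the training set for the next step); the record format and field names are illustrative assumptions, not taken from the paper.

```python
def recurrent_yearly_splits(records, train_year=2013, horizon=3):
    """Yield (train, test) pairs for a recurrent yearly evaluation.

    Starts from all records up to `train_year`; at each step the
    model would be evaluated on the next year's records, which are
    then folded into the training pool for the following step.
    """
    train = [r for r in records if r["year"] <= train_year]
    for offset in range(1, horizon + 1):
        test = [r for r in records if r["year"] == train_year + offset]
        yield list(train), test
        train.extend(test)  # fold the evaluated year into training data

# Dummy guarantee records, one per year (illustrative only)
records = [{"year": y, "edge": (y, y + 1)} for y in range(2012, 2017)]
splits = list(recurrent_yearly_splits(records))
# three (train, test) pairs, covering 2014, 2015 and 2016
```

The key point is that the test year never leaks into its own training pool; it only becomes training data for later years.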
Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models, memory, or cloud instances used for running experiments.
Software Dependencies | No | The paper mentions applying the Adam algorithm and implementing pred(ui) with a neural network, but it does not specify software versions (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We set the initial learning rate to 0.001, and the batch size to 128 by default. We set embedding dimension to 128, λ are determined by the data distribution, which is set to 16. The parameters of baseline methods are initialized by their recommended settings.
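The reported hyperparameters (learning rate 0.001, batch size 128, embedding dimension 128, λ = 16) together with the Adam optimizer the paper mentions can be collected into a config sketch like the one below. The Adam step here is a minimal stdlib-only illustration using the standard default moment coefficients; the parameter vector and gradients are made up for demonstration and nothing beyond the four reported values comes from the paper.

```python
# Hyperparameters as reported in the paper; all other values are illustrative.
CONFIG = {
    "learning_rate": 0.001,  # initial learning rate for Adam
    "batch_size": 128,
    "embedding_dim": 128,
    "lambda": 16,            # set according to the data distribution
}

def adam_step(params, grads, state, lr=CONFIG["learning_rate"],
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moments."""
    state["t"] += 1
    t = state["t"]
    updated = []
    for i, (p, g) in enumerate(zip(params, grads)):
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** t)   # bias correction
        v_hat = state["v"][i] / (1 - beta2 ** t)
        updated.append(p - lr * m_hat / (v_hat ** 0.5 + eps))
    return updated

# Illustrative single update on a two-parameter toy model
state = {"t": 0, "m": [0.0, 0.0], "v": [0.0, 0.0]}
params = adam_step([0.5, -0.5], [0.1, -0.2], state)
```

On the first step the bias-corrected update magnitude is close to the learning rate itself, which is why each toy parameter moves by roughly 0.001 here.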