The Causal Impact of Credit Lines on Spending Distributions
Authors: Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To assess the effectiveness of our methods, we conduct a simulation study. The results reveal that all three estimators are effective, especially for the DML estimator. We finally apply our approach to investigate the causal impact of credit lines on spending distributions based on a real-world dataset collected from a large e-commerce platform. |
| Researcher Affiliation | Collaboration | 1 School of Data Science, City University of Hong Kong; 2 Department of Financial and Actuarial Mathematics, Xi'an Jiaotong-Liverpool University; 3 Institute of Statistics and Big Data, Renmin University of China; 4 JD Digits |
| Pseudocode | Yes | Algorithm 1: Computations of d̂_{i,w} |
| Open Source Code | Yes | Our code is available at https://github.com/lyjsilence/The-Causal-Impact-of-Credit-Lines-on-Spending-Distributions. |
| Open Datasets | No | We finally apply our approach to investigate the causal impact of credit lines on spending distributions based on a real-world dataset collected from a large e-commerce platform. ... We collect data from 4,043 platform users. ... The data comprises various variables... Appendix F displays a detailed statistical description. |
| Dataset Splits | Yes | We split the N units into K disjoint groups. Let the kth group be D_k of size N_k and form D_{-k}. ... 5-fold cross-fitting, i.e., 4,000 instances are used to train, and 1,000 instances are used to obtain the three estimators (i.e., the DR, IPW, and DML estimators). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) are mentioned for the experimental setup. |
| Software Dependencies | No | The paper mentions 'random forest' and 'MLP' but does not provide specific version numbers for any software, libraries, or frameworks. |
| Experiment Setup | Yes | The classification and functional regression models are trained separately. 5,000 generated instances are trained using 5-fold cross-fitting, i.e., 4,000 instances are used to train, and 1,000 instances are used to obtain the three estimators (i.e., the DR, IPW, and DML estimators). At last, we average the obtained estimators from the 5 folds as the final results. In the classification task, we use the same classifier (i.e., random forest) to compute IPW for all the estimators. The training details are given in Appendix E. |
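The cross-fitting procedure quoted above (train nuisance models on K−1 folds, evaluate the estimator on the held-out fold, average over folds) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `cross_fit_ipw` is ours, the propensity model is a placeholder (the marginal treatment rate on the training folds) standing in for the paper's random-forest classifier, and only a scalar IPW-style estimate is shown rather than the paper's distributional DR/IPW/DML estimators.

```python
import numpy as np

def cross_fit_ipw(T, Y, K=5, seed=0):
    """K-fold cross-fitting sketch.

    For each fold k: fit the nuisance (propensity) model on the other
    K-1 folds, form an IPW estimate of E[Y(1)] on fold k, then average
    the K fold-level estimates, mirroring the setup quoted above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    folds = np.array_split(idx, K)           # K disjoint groups D_k
    estimates = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        # Placeholder propensity: treatment frequency on D_{-k}
        # (the paper trains a random forest classifier here instead).
        e_hat = T[train].mean()
        # IPW estimate of the mean outcome under treatment on D_k.
        estimates.append(np.mean(T[test] * Y[test] / e_hat))
    # Average the fold-level estimates as the final result.
    return float(np.mean(estimates))
```

With everyone treated and a constant outcome, the estimate reduces to that constant, which makes the averaging step easy to sanity-check; in practice the propensity model would be replaced by the trained classifier.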