Cross-Domain Adaptative Learning for Online Advertisement Customer Lifetime Value Prediction

Authors: Hongzu Su, Zhekai Du, Jingjing Li, Lei Zhu, Ke Lu

AAAI 2023

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response extracted from the paper.
Research Type: Experimental. The proposed framework is evaluated on five datasets collected from real historical data on the advertising platform of Tencent Games. Experimental results verify that the framework significantly improves LTV prediction performance on this platform; for instance, it boosts DCNv2 by 13.7% in terms of AUC on dataset G2.
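The 13.7% figure is a relative AUC improvement over the DCNv2 baseline. A minimal sketch of how such a comparison could be computed (not from the paper; the labels and scores below are hypothetical stand-ins, since the data are not public):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels and predicted scores for illustration only.
y_true = np.array([0, 1, 0, 1, 1, 0])
baseline_scores = np.array([0.2, 0.6, 0.4, 0.5, 0.7, 0.3])  # e.g. plain DCNv2
boosted_scores = np.array([0.1, 0.8, 0.3, 0.7, 0.9, 0.2])   # e.g. DCNv2 + CDAF

auc_base = roc_auc_score(y_true, baseline_scores)
auc_boost = roc_auc_score(y_true, boosted_scores)

# Relative improvement, matching the "improvement in terms of AUC" phrasing.
print(f"relative AUC gain: {100 * (auc_boost - auc_base) / auc_base:.1f}%")
```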
Researcher Affiliation: Academia. Hongzu Su¹, Zhekai Du¹, Jingjing Li¹,²*, Lei Zhu³, Ke Lu¹. ¹University of Electronic Science and Technology of China; ²Institute of Electronic and Information Engineering of UESTC in Guangdong; ³Shandong Normal University. Contact: {hongzus, zhekaid}@std.uestc.edu.cn, lijin117@yeah.net, leizhu0608@gmail.com, kel@uestc.edu.cn.
Pseudocode: No. The paper describes the proposed method and optimization strategy in text and mathematical equations, but it does not include any explicit pseudocode or algorithm blocks.
Open Source Code: Yes. Code: https://github.com/TL-UESTC/CDAF.
Open Datasets: No. The method is evaluated on five real-world datasets constructed by randomly sampling historical interaction data from two advertising platforms dedicated to games; these datasets are not publicly released.
Dataset Splits: Yes. The split sizes are reported in Table 1:

Dataset      N_train     N_eval    N_test
G1-source    5,270,578   359,786   493,950
G1-target    68,042      4,005     4,005
G2-source    2,865,352   188,885   342,625
G2-target    44,691      3,542     3,542
G3-source    5,275,578   433,160   422,494
G3-target    140,269     11,888    11,888
G4-source    6,667,724   652,876   785,521
G4-target    2,530       5,933     5,933
G5-source    5,543,587   441,561   554,931
G5-target    183,413     12,041    12,041

Table 1: Statistics of the evaluation datasets. N_train, N_eval, and N_test denote the number of training, evaluation, and test samples, respectively.
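The paper states the datasets were built by random sampling but does not publish the sampling code. A minimal sketch of producing fixed-size, disjoint train/eval/test splits like those in Table 1 (function name and seed are illustrative assumptions, not from the paper):

```python
import numpy as np

def split_indices(n_samples, n_train, n_eval, n_test, seed=0):
    """Randomly partition n_samples indices into disjoint
    train/eval/test index sets of the given sizes."""
    assert n_train + n_eval + n_test <= n_samples
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    train = perm[:n_train]
    eval_ = perm[n_train:n_train + n_eval]
    test = perm[n_train + n_eval:n_train + n_eval + n_test]
    return train, eval_, test

# Example with the G1-target sizes from Table 1 (68,042 + 4,005 + 4,005).
train_idx, eval_idx, test_idx = split_indices(
    n_samples=76_052, n_train=68_042, n_eval=4_005, n_test=4_005)
```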
Hardware Specification: Yes. All of the feature embedding models and predictors are implemented with TensorFlow 2.4 and trained on NVIDIA Tesla V100 GPUs.
Software Dependencies: Yes. The implementation depends on TensorFlow 2.4 (same evidence as the hardware entry above).
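A quick sanity check of the stated environment, assuming a standard TensorFlow install (a sketch for reproducers, not from the paper):

```python
import tensorflow as tf

# Confirm the reported TensorFlow version and list visible GPUs
# (the authors report NVIDIA Tesla V100 cards).
print(tf.__version__)                          # expected: 2.4.x
print(tf.config.list_physical_devices("GPU"))  # empty list if no GPU is visible
```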
Experiment Setup: Yes. All models are optimized with Adam (Kingma and Ba 2015) using β₁ = 0.9 and β₂ = 0.999, and all hyper-parameters are selected on the validation sets.
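In TensorFlow 2.4 the quoted settings match the Keras Adam defaults. A minimal sketch of the optimizer configuration (the learning rate is not stated in the quote; 1e-3 below is the Keras default, used here as an assumption):

```python
import tensorflow as tf

# Adam with beta_1 = 0.9 and beta_2 = 0.999, as quoted above.
# learning_rate is an assumption: the quote does not specify it.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3, beta_1=0.9, beta_2=0.999)
```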