User Retention: A Causal Approach with Triple Task Modeling

Authors: Yang Zhang, Dong Wang, Qiang Li, Yue Shen, Ziqi Liu, Xiaodong Zeng, Zhiqiang Zhang, Jinjie Gu, Derek F. Wong

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on both offline and online environments from different scenarios demonstrate the superiority of UR-IPW over previous methods. We conduct extensive experiments on both offline and online environments.
Researcher Affiliation | Collaboration | Yang Zhang (1,2), Dong Wang (1), Qiang Li (1), Yue Shen (1), Ziqi Liu (1), Xiaodong Zeng (1), Zhiqiang Zhang (1), Jinjie Gu (1), and Derek F. Wong (3); 1 Ant Group, Hangzhou, China; 2 Beihang University, Beijing, China; 3 University of Macau, Macau, China
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements about releasing source code or links to a code repository.
Open Datasets | No | Our two production datasets are collected from Alipay's recommender system. The release of related datasets requires strict approval, and we are going through the relevant approval procedure; the dataset is only used for academic research and does not represent any real business situation.
Dataset Splits | No | For each dataset, we split the first 4 days in the time sequence to be the training set, while the rest forms the test set. The paper specifies training and test sets but does not explicitly mention a separate validation set split.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | Yes | All the deep neural network-based models are implemented in TensorFlow v1.13 using the Adam optimizer.
Experiment Setup | Yes | The learning rate is set as 0.0005 and the mini-batch size is set as 1024. A cross-entropy loss function is used for each prediction task in all models. There are 5 layers in the MLP, where the layer dimensions are set as 512, 256, 128, 32, and 2. (A hedged sketch of this setup follows the table.)
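
For readers who want a concrete starting point, the following is a minimal sketch of the reported configuration: a time-ordered split with the first 4 days as training data and the rest as test data, a 5-layer MLP with dimensions 512-256-128-32-2, the Adam optimizer with learning rate 0.0005, a mini-batch size of 1024, and a cross-entropy loss per prediction task. The paper reports TensorFlow 1.13; this sketch uses the TensorFlow 2.x Keras API for brevity, and the feature schema, column names ("day", "label"), and ReLU/softmax activation choices are assumptions rather than details given in the paper.

import pandas as pd
import tensorflow as tf

def time_based_split(df, day_col="day", train_days=4):
    # Reported scheme: first 4 days (in time order) for training, the rest for test.
    # df is assumed to be a pandas DataFrame; the "day" column name is a hypothetical placeholder.
    train = df[df[day_col] <= train_days]
    test = df[df[day_col] > train_days]
    return train, test

def build_mlp(input_dim):
    # 5-layer MLP with the reported dimensions 512-256-128-32-2.
    # ReLU activations and the softmax head are assumptions; the paper only states the sizes.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(input_dim,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    # Adam with the reported learning rate; cross-entropy loss for the prediction task.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Example usage with the hypothetical schema:
# train_df, test_df = time_based_split(logs_df)
# feature_cols = [c for c in train_df.columns if c not in ("day", "label")]
# model = build_mlp(input_dim=len(feature_cols))
# model.fit(train_df[feature_cols].values, train_df["label"].values,
#           batch_size=1024, epochs=1,
#           validation_data=(test_df[feature_cols].values, test_df["label"].values))

This covers only the generic split, MLP, and optimizer settings reported above; the UR-IPW triple-task structure and inverse propensity weighting that the paper actually proposes are not reproduced here.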