RevMan: Revenue-aware Multi-task Online Insurance Recommendation

Authors: Yu Li, Yi Zhang, Lu Gan, Gengwei Hong, Zimu Zhou, Qiang Li

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive offline and online evaluations show that RevMan outperforms the state-of-the-art recommendation systems for e-commerce."
Researcher Affiliation | Collaboration | 1 College of Computer Science and Technology, Jilin University; 2 We Sure Inc.; 3 School of Information Systems, Singapore Management University; 4 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University
Pseudocode | No | The paper describes the REINFORCE algorithm and its optimization equation, and includes workflow diagrams (Fig. 2, Fig. 4), but it does not contain a distinct figure or block explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | "Our dataset consists of 5.6 million online samples collected from the impression and conversion logs of a major online insurance platform in China." This indicates a proprietary or internal dataset, with no information on public availability or access.
Dataset Splits | No | "We use the first 80% samples for training, and the remaining 20% for testing." While training and testing splits are specified, there is no explicit mention of a separate validation dataset split.
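The chronological 80/20 split described in that row can be sketched as follows; this is an illustrative reconstruction, not the paper's code, and `samples` is a hypothetical list of log records already ordered by timestamp:

```python
def chronological_split(samples, train_frac=0.8):
    """Split time-ordered samples into train/test without shuffling,
    matching the paper's 'first 80% for training, remaining 20% for
    testing' description."""
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

# Toy usage: 10 records ordered by time.
train, test = chronological_split(list(range(10)))
# train holds the first 8 records, test the last 2
```

Keeping the split chronological (no shuffling) avoids leaking future impressions into training, which matters for conversion logs.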
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU or CPU models, processor types, or memory amounts used for running its experiments. It only mentions general aspects like optimizers.
Software Dependencies | No | The paper mentions using "Adam (Kingma and Ba 2015) as the optimizer" and "ReLU activation", but it does not provide specific software dependencies with version numbers (e.g., Python version, or library versions such as TensorFlow or PyTorch).
Experiment Setup | Yes | "During training, we use Adam (Kingma and Ba 2015) as the optimizer. For each model, the relevant hyper-parameters (e.g., neurons per layer) are empirically tuned. During testing, the learning rate is set to lr = 0.002 in order to control the update step. We assign [0.5, 0.1, 0.2, 0.2] as the weight vector in the loss function Eq. (11) and set ϵ = 100 as the stop condition."
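The weight vector [0.5, 0.1, 0.2, 0.2] quoted above implies a weighted sum over four task losses. A minimal sketch of that combination, assuming the paper's Eq. (11) is a weighted sum (the four component losses here are hypothetical placeholders, not the paper's actual loss terms):

```python
# Weight vector reported for the loss in Eq. (11) of the paper.
WEIGHTS = [0.5, 0.1, 0.2, 0.2]

def weighted_loss(task_losses, weights=WEIGHTS):
    """Combine per-task losses into one scalar objective via a
    weighted sum, as commonly done in multi-task training."""
    assert len(task_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, task_losses))

# If every task loss equals 1.0, the combined loss is the weight sum:
# weighted_loss([1.0, 1.0, 1.0, 1.0]) == 0.5 + 0.1 + 0.2 + 0.2 == 1.0
```

The reported lr = 0.002 and stop condition ϵ = 100 would then govern the optimizer step size and the training-loop termination check, respectively; those details are not reconstructed here since the paper does not specify them precisely.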