Life-Stage Modeling by Customer-Manifold Embedding

Authors: Jing-Wen Yang, Yang Yu, Xiao-Peng Zhang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate the proposed method, we conduct experiments on real-world data. Experimental results show that the proposed method can achieve significantly better performance than baseline recommendation approaches.
Researcher Affiliation | Collaboration | Jing-Wen Yang, Yang Yu, Xiao-Peng Zhang. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, China; Tencent Inc., China. yangjw@lamda.nju.edu.cn, yuy@lamda.nju.edu.cn, xpzhang@tencent.com
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any information about open-source code for the described methodology.
Open Datasets | No | The data used in the experiments is provided by Tencent Inc. and consists of customer online shopping history from 7/21/2015 to 2/1/2016, collected from a real B2C e-commerce system that serves millions of people every day.
Dataset Splits | No | The paper states that 'the data before 1/22/2016 is used for training and the rest is used for testing,' but does not explicitly mention a validation set or specific split percentages/counts for training, validation, and testing (see the date-split sketch after the table).
Hardware Specification | No | The paper mentions only that methods are compared 'on the same machine' when discussing time efficiency, and provides no specific hardware details such as GPU model, CPU model, or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as library or framework versions (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x).
Experiment Setup | Yes | In the evolutionary similarity computation, λ is 0.5, and dist equals 0.5 if two items belong to the same category and 5 otherwise. We adopt only 1 LSTM layer and 2 forward layers. For example, 100-65-2000 denotes that we train a network with 100 hidden units and a mini-batch size of 65 for 2000 epochs (see the configuration sketch after the table).
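
For reference, the date-based split the paper reports can be expressed as a minimal sketch, assuming the shopping history lives in a pandas DataFrame with a `timestamp` column; the DataFrame layout and column name are assumptions, as the paper specifies only the cutoff date:

```python
import pandas as pd

# Cutoff reported in the paper: data before 1/22/2016 is used for training,
# the rest for testing. The DataFrame layout and the 'timestamp' column name
# are assumptions for illustration only.
CUTOFF = pd.Timestamp("2016-01-22")

def date_split(history: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split shopping history at the reported cutoff date (no validation set,
    matching the paper's description)."""
    ts = pd.to_datetime(history["timestamp"])
    train = history[ts < CUTOFF]
    test = history[ts >= CUTOFF]
    return train, test
```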
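Similarly, a minimal sketch of the reported experiment setup, assuming PyTorch; the module structure, input/output dimensions, and the width of the forward layers are assumptions, since the paper specifies only λ = 0.5, the category-dependent dist values, one LSTM layer, two forward layers, and the 100-65-2000 training configuration:

```python
import torch
import torch.nn as nn

# Reported values: lambda = 0.5; dist = 0.5 for same-category items, 5 otherwise.
LAMBDA = 0.5

def item_dist(cat_a: str, cat_b: str) -> float:
    """Category-based item distance used in the evolutionary similarity."""
    return 0.5 if cat_a == cat_b else 5.0

# "100-65-2000": 100 hidden units, mini-batch size 65, 2000 epochs (from the paper).
HIDDEN, BATCH, EPOCHS = 100, 65, 2000

class LifeStageNet(nn.Module):
    """1 LSTM layer followed by 2 forward (fully connected) layers, as reported.
    input_dim and output_dim are assumptions for illustration."""
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, HIDDEN, num_layers=1, batch_first=True)
        self.fc1 = nn.Linear(HIDDEN, HIDDEN)
        self.fc2 = nn.Linear(HIDDEN, output_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                  # (batch, seq, HIDDEN)
        h = torch.relu(self.fc1(out[:, -1]))   # take the last time step
        return self.fc2(h)
```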