Question-Driven Purchasing Propensity Analysis for Recommendation

Authors: Long Chen, Ziyu Guan, Qibin Xu, Qiong Zhang, Huan Sun, Guangyue Lu, Deng Cai

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate QDANN in three domains of Taobao. The results show the efficacy of our method and its superiority over baseline methods.
Researcher Affiliation | Collaboration | Long Chen (1) Xi'an University of Posts and Telecommunications, Xi'an, China; Ziyu Guan (2) Northwest University, Xi'an, China; Qibin Xu (3) Zhejiang University, Hangzhou, China; Qiong Zhang (4) Alibaba Group, Hangzhou, China; Huan Sun (5) Ohio State University, Columbus, USA
Pseudocode | No | The paper describes its model architecture and components in text and diagrams, but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the QDANN methodology is open-source or publicly available.
Open Datasets | No | In this section, we evaluate our method on a dataset collected from Taobao.com.
Dataset Splits | Yes | All the preprocessed positive tuples are split into training set (70%), validation set (10%) and test set (20%).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions 'Adam (Kingma and Ba 2014)' as the optimizer and 'Chinese-Word-Vectors', but does not provide specific version numbers for these or any other software dependencies such as libraries or frameworks.
Experiment Setup | Yes | We use Adam (Kingma and Ba 2014) for training. We set the learning rate to 1e-6. The first and second momentum coefficients are set to 0.9 and 0.999 respectively. We follow the empirical conclusion in (Guan et al. 2016) to set λ in Eq. (18) to 0.5. The length of the hidden state vectors of GRUs (u) and the hyperparameter k are both set to 64 according to the parameter study. The mini-batch size for SGD is set to 32.
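The Experiment Setup and Dataset Splits rows list the only concrete reproduction parameters the paper gives. A minimal sketch collecting them in plain Python, with an integer-exact version of the 70%/10%/20% split (the config keys and function name are illustrative assumptions; the authors released no code, so this is not their implementation):

```python
# Hyperparameters reported for QDANN, gathered into one config dict.
# Key names are assumptions for illustration; values come from the paper.
config = {
    "optimizer": "Adam",       # Kingma and Ba 2014
    "learning_rate": 1e-6,
    "beta1": 0.9,              # first momentum coefficient
    "beta2": 0.999,            # second momentum coefficient
    "lambda": 0.5,             # weight in Eq. (18), following Guan et al. 2016
    "gru_hidden_size": 64,     # u, length of GRU hidden state vectors
    "k": 64,                   # hyperparameter k from the parameter study
    "batch_size": 32,
}

def split_counts(n_positive_tuples):
    """Return (train, val, test) sizes for the paper's 70%/10%/20% split.

    Integer arithmetic avoids floating-point rounding; the remainder
    after train and validation goes to the test set.
    """
    n_train = n_positive_tuples * 7 // 10
    n_val = n_positive_tuples // 10
    n_test = n_positive_tuples - n_train - n_val
    return n_train, n_val, n_test

print(split_counts(10000))  # (7000, 1000, 2000)
```

The paper does not say how tuples are shuffled before splitting, so only the split proportions are reproduced here.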