Robust Spatio-Temporal Purchase Prediction via Deep Meta Learning

Authors: Huiling Qin, Songyu Ke, Xiaodu Yang, Haoran Xu, Xianyuan Zhan, Yu Zheng

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quoting the paper: "Extensive experiments demonstrate the meta-learning generalization ability of STMP. STMP outperforms baselines in all cases, which shows the effectiveness of our model."
Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, Xidian University, Xi'an, China; (2) JD Intelligent Cities Research, Beijing, China; (3) JD iCity, JD Technology, Beijing, China; (4) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; (5) Artificial Intelligence Institute, Southwest Jiaotong University, China
Pseudocode | Yes | The paper provides pseudocode in Algorithm 1 (ST-Training).
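The paper's Algorithm 1 (ST-Training) is not reproduced in this report. For orientation only, the sketch below shows a minimal first-order MAML-style episodic training loop of the kind such meta-learning pseudocode typically describes; every name here (st_training_sketch, the task tuple layout, the learning rates) is a hypothetical assumption, not a detail taken from the paper.

```python
# Minimal first-order MAML-style loop (hypothetical sketch; the paper's
# Algorithm 1 "ST-Training" is not reproduced here).
import copy
import torch
import torch.nn as nn

def st_training_sketch(model: nn.Module, tasks, inner_lr=0.01, outer_lr=0.001,
                       inner_steps=1, meta_iters=1000):
    """tasks: iterable of (support_x, support_y, query_x, query_y) tensors."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    loss_fn = nn.MSELoss()
    for _ in range(meta_iters):
        meta_opt.zero_grad()
        for support_x, support_y, query_x, query_y in tasks:
            # Inner loop: adapt a copy of the shared model on the task's
            # support ("training") set.
            fast = copy.deepcopy(model)
            inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
            for _ in range(inner_steps):
                inner_opt.zero_grad()
                loss_fn(fast(support_x), support_y).backward()
                inner_opt.step()
            # Outer loop: evaluate the adapted copy on the query ("testing")
            # set and accumulate first-order gradients into the shared init.
            query_loss = loss_fn(fast(query_x), query_y)
            grads = torch.autograd.grad(query_loss, fast.parameters())
            for p, g in zip(model.parameters(), grads):
                p.grad = g if p.grad is None else p.grad + g
        meta_opt.step()
```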
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the source code.
Open Datasets | No | The paper states, "We use a large, high-quality online purchase dataset from JD.com to evaluate our model." However, it does not provide concrete access information (e.g., a link, DOI, or explicit statement of public availability) for this dataset. JD.com is a commercial entity, implying the dataset is proprietary.
Dataset Splits | No | The paper notes that the training and testing data are distinguished for each task and describes "1-shot to 4-shot experiments using different numbers of years' data (2015–2018)", which implies some data partitioning. However, it does not provide the specific train/validation/test splits (e.g., percentages, sample counts, or a citation to a predefined split) needed to reproduce the experiments on the full dataset.
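To illustrate what a per-task support/query split looks like in a k-shot setting (as the quoted passage suggests), here is a minimal sketch; the year-based task grouping and all identifiers are assumptions, since the paper does not specify its splitting procedure.

```python
# Hypothetical sketch of building k-shot tasks with per-task support
# ("training") / query ("testing") splits; the year-based grouping and all
# names are assumptions, not details from the paper.
import random
from collections import defaultdict

def build_kshot_tasks(records, k=1, query_size=1, seed=0):
    """records: iterable of dicts, each with at least a 'year' key."""
    rng = random.Random(seed)
    by_year = defaultdict(list)
    for r in records:
        by_year[r["year"]].append(r)
    tasks = []
    for year, items in sorted(by_year.items()):  # e.g. 2015..2018
        rng.shuffle(items)
        support = items[:k]                      # k-shot support set
        query = items[k:k + query_size]          # held-out query set
        if len(support) == k and len(query) == query_size:
            tasks.append({"year": year, "support": support, "query": query})
    return tasks
```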
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies or their version numbers (e.g., Python, deep learning frameworks, or libraries) used in the experiments.
Experiment Setup | No | The paper states, "To make a fair comparison, we present the best performance of each method under fine-tuned parameter settings in Table 1," implying that hyperparameter tuning was performed. However, it does not list the specific hyperparameter values or other training configuration details in the main text.
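Since the paper reports only each method's best performance under fine-tuned parameter settings, without specifying the search space or procedure, the following is a generic grid-search sketch for illustration only; the candidate values and the train_and_eval callable are hypothetical, not the authors' setup.

```python
# Generic hyperparameter grid-search sketch (illustrative only; the paper
# does not specify its search space or tuning procedure).
from itertools import product

grid = {
    "inner_lr": [1e-3, 1e-2],   # assumed candidate values
    "outer_lr": [1e-4, 1e-3],
    "hidden_dim": [64, 128],
}

def grid_search(train_and_eval, grid):
    """train_and_eval: callable mapping a config dict to a validation score."""
    best_cfg, best_score = None, float("inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_eval(cfg)  # lower is better, e.g. RMSE
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```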