Intention2Basket: A Neural Intention-driven Approach for Dynamic Next-basket Planning
Authors: Shoujin Wang, Liang Hu, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Longbing Cao
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on real-world datasets show the superiority of Int2Ba over the state-of-the-art approaches." (Section 5: Experiments and Evaluation) |
| Researcher Affiliation | Academia | ¹Department of Computing, Macquarie University; ²Advanced Analytics Institute, University of Technology Sydney; ³University of Shanghai for Science and Technology |
| Pseudocode | No | The paper describes the model's components and processes mathematically but does not include structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Two real-world transaction datasets commonly used to test the performance of next-basket prediction [Guidotti et al., 2018; Le et al., 2019] are used for the experiments: (1) Tmall¹, released by the IJCAI-15 competition, which records the shopping baskets purchased by each anonymous user on Tmall.com (the Chinese counterpart of Amazon) over six months; the purchase date of each basket is given, but no timestamps for the items inside it. (2) Tafeng², released on Kaggle, which contains four months of transaction data from a Chinese grocery store in a format similar to Tmall. ¹https://tianchi.aliyun.com/dataset/dataDetail?dataId=42 ²https://www.kaggle.com/chiranjivdas09/ta-feng-grocerydataset |
| Dataset Splits | No | "Finally, we randomly select 20%, 30% and 40% of the instances whose target basket happens in the last 30 days to form three test sets, while the remainder forms the corresponding training set, respectively. In our model... is set to 3 and 100, respectively, by tuning on the validation set." (A hedged sketch of this split procedure follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments. |
| Software Dependencies | Yes | Our model is implemented using TensorFlow 1.11 and its parameters are learned based on a mini-batch learning procedure. Adam [Kingma and Ba, 2015] is used for gradient learning. |
| Experiment Setup | Yes | The initial learning rate is empirically set to 0.001 and the batch size is set to 50. In our model, the dimensions of embeddings and intention states are empirically set to 100, while the number of channels m and the candidate number H in DBP are set to 3 and 100, respectively, by tuning on the validation set. (A hedged TensorFlow 1.x sketch of this configuration follows the table.) |
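To make the quoted split procedure concrete, here is a minimal Python sketch. It assumes a pandas DataFrame of prediction instances with a `basket_date` column; the column names, the 30-day cutoff computation, and the sampling seed are all assumptions, since the paper releases no preprocessing code.

```python
import pandas as pd

# Minimal sketch of the split described in the paper: instances whose
# target basket falls in the last 30 days are candidates for the test
# set; 20%/30%/40% of them are sampled at random, and the remainder of
# all instances forms the corresponding training set.
# NOTE: `instances`, its columns, and the seed are hypothetical.

def split_train_test(instances: pd.DataFrame, test_fraction: float, seed: int = 42):
    cutoff = instances["basket_date"].max() - pd.Timedelta(days=30)
    late_instances = instances[instances["basket_date"] > cutoff]
    test = late_instances.sample(frac=test_fraction, random_state=seed)
    train = instances.drop(test.index)
    return train, test

# Three train/test pairs as in the paper: 20%, 30%, 40% of late instances.
# splits = {f: split_train_test(instances, f) for f in (0.2, 0.3, 0.4)}
```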
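Similarly, the reported training configuration can be wired up in a short TensorFlow 1.x sketch. Only the numbers quoted above (Adam, learning rate 0.001, batch size 50, dimension 100, m = 3, H = 100) come from the paper; the catalogue size, placeholder names, and toy loss are hypothetical stand-ins for the unpublished Int2Ba model.

```python
import tensorflow as tf  # the paper reports TensorFlow 1.11 (1.x graph mode)

# Hyperparameters quoted in the paper.
LEARNING_RATE = 0.001  # initial learning rate, empirically set
BATCH_SIZE = 50        # mini-batch size
EMBED_DIM = 100        # dimension of embeddings and intention states
M_CHANNELS = 3         # number of channels m, tuned on the validation set
H_CANDIDATES = 100     # candidate number H in DBP, tuned on the validation set

NUM_ITEMS = 10000  # hypothetical catalogue size; not reported in the paper

# Hypothetical stand-in for the Int2Ba objective: the paper specifies the
# model only mathematically, so this toy loss merely shows how the quoted
# optimizer settings would be wired together in TF 1.x.
item_ids = tf.placeholder(tf.int32, shape=[BATCH_SIZE], name="item_ids")
item_embeddings = tf.get_variable("item_embeddings", shape=[NUM_ITEMS, EMBED_DIM])
batch_vectors = tf.nn.embedding_lookup(item_embeddings, item_ids)
toy_loss = tf.reduce_mean(tf.square(batch_vectors))

# Adam [Kingma and Ba, 2015] for gradient learning, as stated in the paper.
train_op = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(toy_loss)
```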