Correlation-Sensitive Next-Basket Recommendation
Authors: Duc-Trong Le, Hady W. Lauw, Yuan Fang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three public real-life datasets showcase the effectiveness of our approach for the next-basket recommendation problem. We investigate the efficacy of Beacon for the next-basket recommendation task, particularly through comparing with a series of classic and state-of-the-art baselines, and conducting both quantitative and qualitative analyses on our model. |
| Researcher Affiliation | Academia | Duc-Trong Le, Hady W. Lauw and Yuan Fang, School of Information Systems, Singapore Management University, Singapore {ductrong.le.2014, hadywlauw, yfang}@smu.edu.sg |
| Pseudocode | No | The information is insufficient. The paper describes the proposed framework and its components using equations and descriptive text, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The information is insufficient. The paper mentions the use of existing implementations for some baselines, but does not provide any link or explicit statement about the availability of the source code for their proposed method, Beacon. |
| Open Datasets | Yes | We conduct experiments on three publicly available real-life datasets of three different domains as follows. Ta Feng (https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset) is a grocery shopping dataset containing transactions from Nov 2000 to Feb 2001. ... Delicious (https://grouplens.org/datasets/hetrec-2011) consists of users' sequences of bookmarks. ... Foursquare (http://www.ntu.edu.sg/home/gaocong/datacode.htm) has users' chronological check-ins from Aug 2010 to Jul 2011 [Yuan et al., 2013]. |
| Dataset Splits | Yes | To create train/validation/test sets, sequences are chronologically split into three non-overlapping periods (t_train, t_val, t_test), i.e., (3, 0.5, 0.5) months for Ta Feng, (80, 2, 2) months for Delicious and (10, 0.5, 0.5) months for Foursquare. For the train and validation sets, we generate all subsequences of the basket sequences with more than 3 baskets. |
| Hardware Specification | Yes | For our experiments on NVIDIA P100 GPU with 16GB memory, each mini-batch takes approximately 0.1 second. |
| Software Dependencies | No | The information is insufficient. The paper mentions the use of an 'RMSProp optimizer' and 'LSTM units' and discusses recurrent units like GRU, but it does not specify any software or library names with version numbers. |
| Experiment Setup | Yes | Our model is trained for 15 epochs with batch size 32. We use the RMSProp optimizer with learning rate 0.001. The LSTM layer is applied with a 0.3 dropout probability. η is initialized by the mean of non-zero values in C. The model is further tuned on the validation set over the latent dimension L ∈ {8, 16, 32, 64} and recurrent hidden units H ∈ {16, 32, 64} using a grid search. Lastly, we use α = 0.5 as the default to control the trade-off between sequential and correlative associations. |
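The experiment-setup row above can be summarized as a small configuration-and-grid-search sketch. This is not the authors' code (none was released); the `evaluate` callback is a hypothetical stand-in for training Beacon at a given (L, H) and scoring it on the validation set.

```python
from itertools import product

# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "epochs": 15,
    "batch_size": 32,
    "optimizer": "RMSProp",
    "learning_rate": 0.001,
    "lstm_dropout": 0.3,
    "alpha": 0.5,  # trade-off between sequential and correlative associations
}

# Grid-search space tuned on the validation set.
LATENT_DIMS = [8, 16, 32, 64]   # latent dimension L
HIDDEN_UNITS = [16, 32, 64]     # recurrent hidden units H


def grid_search(evaluate):
    """Return the (L, H) pair with the best validation score.

    `evaluate` is a hypothetical callback that trains the model with the
    given (L, H) under CONFIG and returns a validation metric
    (higher is better).
    """
    best_pair, best_score = None, float("-inf")
    for L, H in product(LATENT_DIMS, HIDDEN_UNITS):
        score = evaluate(L, H)
        if score > best_score:
            best_pair, best_score = (L, H), score
    return best_pair, best_score
```

For instance, with a toy metric that peaks at L = 32, H = 32, `grid_search(lambda L, H: -(abs(L - 32) + abs(H - 32)))` returns `((32, 32), 0)`.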