Attention-Based Transactional Context Embedding for Next-Item Recommendation
Authors: Shoujin Wang, Liang Hu, Longbing Cao, Xiaoshui Huang, Defu Lian, Wei Liu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical study on real-world transaction datasets proves that ATEM significantly outperforms the state-of-the-art methods in terms of both accuracy and novelty. |
| Researcher Affiliation | Academia | GBDTC, FEIT, University of Technology Sydney; Advanced Analytics Institute, University of Technology Sydney; Big Data Research Center, University of Electronic Science and Technology of China |
| Pseudocode | Yes | Algorithm 1 ATEM Parameter Learning Using SGD |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our method on two real-world transaction data sets: IJCAI-15 (https://tianchi.aliyun.com/datalab/dataSet.htm?id=1) and Tafang (http://stackoverflow.com/questions/25014904/download-link-for-ta-feng-grocery-dataset) |
| Dataset Splits | Yes | We randomly choose 20% of the transactions that happened in the last 30 days as the testing set, while the remainder is used for training. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using Adam for gradient descent but does not specify version numbers for any programming languages or software libraries used. |
| Experiment Setup | Yes | For our ATEM model, the batch size is empirically set to 50, and the number of hidden units for item embeddings is set to 128 and 40 on the IJCAI-15 and Tafang datasets, respectively. We run 20 epochs to train the model. K is empirically set to 8 in our experiments. The parameter λ in exponential decay is set to 0.75 to obtain the best performance. |
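The reported hyperparameters and the train/test split protocol can be sketched as follows. This is a minimal illustration assuming a simple day-stamped transaction record; the variable and function names are hypothetical, not taken from the authors' (unreleased) code.

```python
import random

# Hyperparameters as reported in the paper's experiment setup
# (values from the table above; dict layout is our own).
CONFIG = {
    "batch_size": 50,
    "embedding_dim": {"IJCAI-15": 128, "Tafang": 40},
    "epochs": 20,
    "K": 8,                # cutoff used in evaluation
    "decay_lambda": 0.75,  # exponential-decay parameter
}

def split_transactions(transactions, last_day, test_fraction=0.2, seed=0):
    """Sketch of the paper's split: 20% of the transactions from the
    last 30 days form the test set; everything else is training data."""
    rng = random.Random(seed)
    recent = [t for t in transactions if t["day"] > last_day - 30]
    older = [t for t in transactions if t["day"] <= last_day - 30]
    rng.shuffle(recent)
    n_test = int(test_fraction * len(recent))
    test = recent[:n_test]
    train = older + recent[n_test:]
    return train, test
```

For example, with 100 transactions spread over days 1-100, the 30 most recent transactions are eligible for testing and 20% of them (6) are sampled into the test set, leaving 94 for training.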