Dynamic Multi-Behavior Sequence Modeling for Next Item Recommendation

Authors: Junsu Cho, Dongmin Hyun, Dongwon Lim, Hyeonjae Cheon, Hyoung-iel Park, Hwanjo Yu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on DyMuS and DyMuS+ show their superiority and the significance of capturing the characteristics of multi-behavior sequences. Our extensive experiments on existing public datasets as well as our new dataset demonstrate that our proposed methods considerably outperform various state-of-the-art baselines. Also, our analyses show that the ability of DyMuS and DyMuS+ to capture the characteristics of multi-behavior sequences presented above significantly affects the recommendation performance.
Researcher Affiliation | Collaboration | (1) Dept. of Computer Science and Engineering, POSTECH, Republic of Korea; (2) Institute of Artificial Intelligence, POSTECH, Republic of Korea; (3) GS Retail, Republic of Korea
Pseudocode | No | The paper includes mathematical equations and architectural diagrams (Figures 1 and 2), but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states: "Moreover, we release a new, large and up-to-date dataset for multi-behavior recommendation." It mentions releasing a dataset only, not the source code for the proposed methods.
Open Datasets | Yes | Our experiments were conducted on two public datasets, Taobao [1] and Tmall [2], and a new dataset we release, GS Retail [3]. [1] https://tianchi.aliyun.com/dataset/dataDetail?dataId=649 [2] https://tianchi.aliyun.com/dataset/dataDetail?dataId=47 [3] http://di.postech.ac.kr
Dataset Splits | Yes | We used the most recent interacted item on the target behavior of each user as the test item, the second most recent one as the validation item, and trained the model with the rest of the data. (A minimal split sketch is given below the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper mentions using the "Adam optimizer (Kingma and Ba 2015)" but does not provide version numbers for it or for any other software libraries or frameworks used in the implementation.
Experiment Setup | Yes | We used the Adam optimizer (Kingma and Ba 2015) to optimize all models, and tuned the hyperparameters based on the validation N@10 performance: learning rate η ∈ {1e-02, 3e-03, 1e-03, 3e-04, 1e-04}, dropout rate p ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}, coefficient for L2 regularization λ ∈ {0.0, 0.001, 0.01, 0.1}, embedding dimension D ∈ {16, 32, 64, 128}, and batch size B ∈ {64, 128, 256, 512}. For SRSs, we tuned the sequence length L ∈ {10, 20, 50, 100}. For DyMuS and DyMuS+, the capsule length is tuned in {2, 4, 8, 16, 32} and the number of dynamic routing iterations is fixed at r = 2. (The search grid is sketched below the table.)
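The Dataset Splits row describes a standard per-user leave-one-out protocol. Below is a minimal Python sketch of that split, assuming interactions arrive as (user, item, timestamp) tuples on the target behavior; the function name and data layout are illustrative, not taken from the paper.

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """Chronological leave-one-out split per user.

    interactions: iterable of (user, item, timestamp) tuples
    on the target behavior. Returns train/validation/test
    structures keyed by user, matching the paper's protocol:
    most recent item -> test, second most recent -> validation,
    the rest -> training.
    """
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    train, valid, test = {}, {}, {}
    for user, events in by_user.items():
        events.sort()                        # oldest -> newest
        items = [item for _, item in events]
        if len(items) < 3:                   # too short to hold out two items
            train[user] = items
            continue
        test[user] = items[-1]               # most recent interaction
        valid[user] = items[-2]              # second most recent
        train[user] = items[:-2]             # remainder for training
    return train, valid, test
```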
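The Experiment Setup row translates directly into a hyperparameter grid. The sketch below encodes the reported search space; the `train_and_eval` callback (a function that trains one configuration and returns its validation N@10) is a hypothetical placeholder, since the paper does not release code.

```python
import itertools

# Search space as reported in the paper's experiment setup.
GRID = {
    "learning_rate": [1e-2, 3e-3, 1e-3, 3e-4, 1e-4],
    "dropout": [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
    "l2_lambda": [0.0, 0.001, 0.01, 0.1],
    "embed_dim": [16, 32, 64, 128],
    "batch_size": [64, 128, 256, 512],
    "seq_len": [10, 20, 50, 100],        # tuned for SRSs only
    "capsule_len": [2, 4, 8, 16, 32],    # tuned for DyMuS / DyMuS+ only
}
ROUTING_ITERS = 2                        # fixed, not tuned

def grid_search(train_and_eval):
    """Exhaustive grid search over GRID.

    train_and_eval(config) is a hypothetical callback that trains a
    model with the given config and returns its validation N@10.
    In practice the model-specific entries (seq_len, capsule_len)
    would only be varied for the models they apply to.
    """
    best_score, best_config = float("-inf"), None
    keys = list(GRID)
    for values in itertools.product(*(GRID[k] for k in keys)):
        config = dict(zip(keys, values), routing_iters=ROUTING_ITERS)
        score = train_and_eval(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```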