Sequential Recommendation with Probabilistic Logical Reasoning

Authors: Huanhuan Yuan, Pengpeng Zhao, Xuefeng Xian, Guanfeng Liu, Yanchi Liu, Victor S. Sheng, Lei Zhao

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, experiments on various sequential recommendation models demonstrate the effectiveness of the SR-PLR. Our code is available at https://github.com/Huanhuaneryuan/SR-PLR.
Researcher Affiliation | Academia | 1 Soochow University, 2 Suzhou Vocational University, 3 Macquarie University, 4 Rutgers University, 5 Texas Tech University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figure 1 is a framework diagram.
Open Source Code | Yes | Our code is available at https://github.com/Huanhuaneryuan/SR-PLR.
Open Datasets | Yes | Experiments are conducted on three publicly available datasets: Amazon Sports, Toys [He and McAuley, 2016], and Yelp.
Dataset Splits | Yes | Following previous works [Kang and McAuley, 2018], we use the 5-core version for all datasets and adopt the leave-one-out method to split these three datasets. (A sketch of this split follows the table.)
Hardware Specification | Yes | We run all methods in PyTorch [Paszke et al., 2017] with the Adam [Kingma and Ba, 2015] optimizer on an NVIDIA GeForce 3070Ti GPU.
Software Dependencies | No | The paper mentions "PyTorch", the "Adam optimizer", and being "implemented based on RecBole", but it does not specify exact version numbers for any of these software components.
Experiment Setup | Yes | The batch size and the dimension of embeddings d are set to 2048 and 64 in our experiments. The max sequential length for all baselines is set as 50. We train all models for 50 epochs. ... SR-PLR is trained with a learning rate of 0.002. For the logic network, we set the λ in Eq. (12) as a hyperparameter and select it from [0, 1] with step 0.1. For the negative item number in Eq. (6), we choose it from 1 to 10. (A configuration sketch collecting these values follows the table.)
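
The leave-one-out protocol quoted in the Dataset Splits row is the standard split in sequential recommendation: the last item of each user's sequence is held out for testing and the second-to-last for validation. Below is a minimal Python sketch of that split, assuming interactions arrive as time-ordered (user, item) pairs; the function name and data format are illustrative assumptions, not the authors' released code.

```python
from collections import defaultdict

def leave_one_out(interactions):
    """Split time-ordered (user, item) pairs: the last item per user is
    the test target, the second-to-last the validation target, and the
    remaining prefix forms the training sequence."""
    seqs = defaultdict(list)
    for user, item in interactions:  # assumed already sorted by timestamp
        seqs[user].append(item)

    train, valid, test = {}, {}, {}
    for user, items in seqs.items():
        if len(items) < 3:  # 5-core filtering makes this guard rarely fire
            continue
        train[user] = items[:-2]
        valid[user] = (items[:-2], items[-2])  # (input prefix, target)
        test[user] = (items[:-1], items[-1])
    return train, valid, test
```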
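The hyperparameters quoted in the Experiment Setup row can be collected into a single training configuration. The sketch below is a hypothetical setup under the reported values, not the released SR-PLR code: the `lambda_logic` and `num_neg_items` entries are mid-range picks from the stated search spaces, and the embedding layer is a placeholder standing in for the SR-PLR model.

```python
import torch

# Reported hyperparameters; λ and negative-item count are assumed
# mid-range picks from the search spaces given in the paper.
config = {
    "batch_size": 2048,    # reported batch size
    "embedding_dim": 64,   # embedding dimension d
    "max_seq_len": 50,     # max sequence length for all baselines
    "epochs": 50,          # training epochs
    "lr": 0.002,           # SR-PLR learning rate
    "lambda_logic": 0.5,   # λ in Eq. (12), tuned over [0, 1] with step 0.1
    "num_neg_items": 5,    # negative items in Eq. (6), tuned over 1..10
}

# Placeholder module standing in for SR-PLR; the paper uses Adam.
model = torch.nn.Embedding(1000, config["embedding_dim"])
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
```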