Spatial-Temporal Perceiving: Deciphering User Hierarchical Intent in Session-Based Recommendation
Authors: Xiao Wang, Tingting Dai, Qiao Liu, Shuang Liang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three real-world datasets exhibit that HearInt achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | Xiao Wang, Tingting Dai, Qiao Liu and Shuang Liang, University of Electronic Science and Technology of China. wangxiao16@std.uestc.edu.cn, ttdai18@outlook.com, qliu@uestc.edu.cn, shuangliang@uestc.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and datasets are publicly available on GitHub: https://github.com/jarviswww/Code4HearInt |
| Open Datasets | Yes | We evaluate the proposed model on three benchmark datasets, namely Tmall (https://tianchi.aliyun.com/dataset/dataDetail?dataId=42), RetailRocket (https://www.kaggle.com/retailrocket/ecommerce-dataset), and Diginetica (https://competitions.codalab.org/competitions/11161). |
| Dataset Splits | No | The paper describes the training and test sets but does not specify a separate validation split, its size, or the method for creating it. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python, PyTorch versions, or specific libraries). |
| Experiment Setup | Yes | For the general setting, the embedding size is 100 and the batch size is 100. For HearInt, the initial learning rate is 0.001, which decays by a factor of 0.6 after every epoch. We employ k-means as the clustering algorithm and set the number of clusters to 100. The threshold α of cosine similarity is 0 on all three datasets. The mask probability β is 0.4 for Tmall and 0.1 for Diginetica and RetailRocket. The best number of layers in both the attention and GNN encoders is 1, 2, and 2 for Tmall, RetailRocket, and Diginetica, respectively. |
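The reported hyperparameters can be collected into a single configuration for reference. The sketch below is an illustration only: the dictionary keys, the `lr_at_epoch` helper, and its name are my own; the paper reports only the values themselves (embedding/batch size 100, initial learning rate 0.001 with per-epoch decay of 0.6, 100 k-means clusters, α = 0, and the per-dataset β and layer counts).

```python
# Hedged sketch of the experiment settings reported for HearInt.
# All names below are illustrative; only the numeric values come from the paper.

CONFIG = {
    "embedding_size": 100,
    "batch_size": 100,
    "initial_lr": 1e-3,
    "lr_decay": 0.6,              # multiplicative decay applied after every epoch
    "num_clusters": 100,          # k for the k-means clustering step
    "cosine_threshold_alpha": 0.0,
    "mask_prob_beta": {"Tmall": 0.4, "Diginetica": 0.1, "RetailRocket": 0.1},
    "num_layers": {"Tmall": 1, "RetailRocket": 2, "Diginetica": 2},
}


def lr_at_epoch(epoch: int, initial_lr: float = 1e-3, decay: float = 0.6) -> float:
    """Learning rate after `epoch` full epochs under the per-epoch decay schedule."""
    return initial_lr * decay ** epoch


print(lr_at_epoch(0))  # 0.001 (initial learning rate)
print(lr_at_epoch(2))  # two decay steps: 0.001 * 0.6 * 0.6
```

This makes the decay schedule concrete: after two epochs the learning rate has dropped to 0.36 of its initial value.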