SMINet: State-Aware Multi-Aspect Interests Representation Network for Cold-Start Users Recommendation

Authors: Wanjie Tao, Yu Li, Liangyue Li, Zulong Chen, Hong Wen, Peilin Chen, Tingting Liang, Quan Lu (pp. 8476-8484)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments conducted both offline and online demonstrate the superior performance of the proposed model at user representation, especially for cold-start users, compared with state-of-the-art methods.
Researcher Affiliation | Collaboration | 1 Alibaba Group, Hangzhou, China; 2 Hangzhou Dianzi University, Hangzhou, China
Pseudocode | No | The paper describes its methods using prose and mathematical equations, but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The details for preprocessing the datasets along with the data and code are released at https://github.com/wanjietao/Fliggy SMINet-AAAI2022
Open Datasets | Yes | We use two datasets. (1) Fliggy: our proprietary dataset extracted from users' behavior logs at Fliggy, one of the largest OTPs in China. ... The details for preprocessing the datasets along with the data and code are released at https://github.com/wanjietao/Fliggy SMINet-AAAI2022. (2) Foursquare: a public dataset that contains check-in data of a user at a particular location at a specific timestamp, along with attribute information of users and locations.
Dataset Splits | No | The paper states that the dataset is further split into training, test, and validation sets, but does not provide specific details on the split (e.g., percentages or counts).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run its experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper describes the model architecture and loss function but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations.