A Personalized Interest-Forgetting Markov Model for Recommendations

Authors: Jun Chen, Chaokun Wang, Jianmin Wang

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conducted experiments on the large-scale music listening data set collected from Last.fm (Celma 2010). The experimental evaluation showed that our methods could significantly improve the accuracy of item recommendation, which verified the importance of considering interest-forgetting in recommendations."
Researcher Affiliation | Academia | "Jun Chen, Chaokun Wang and Jianmin Wang, School of Software, Tsinghua University, Beijing 100084, P.R. China."
Pseudocode | No | The paper describes equations and models, but it does not contain any formal pseudocode blocks or algorithms with numbered steps.
Open Source Code | No | The paper does not mention providing open-source code for its methodology. There are no links or statements about code availability.
Open Datasets | Yes | "We conducted experiments on the large-scale music listening data set collected from Last.fm (Celma 2010)."
Dataset Splits | Yes | "In the evaluation, we used 80% of each user's observing sequences to learn the personalized parameters in the IFMM framework as well as the parameters of the comparative models. Then, the other 20% of observing sequences were used to test the performance by predicting the last song in each test observing sequence while using its rest as input."
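The split protocol quoted above is per user, with the held-out sequences evaluated by last-item prediction. A minimal sketch of that protocol, assuming observing sequences are given as lists of item IDs per user (the helper names are illustrative, not from the paper):

```python
from collections import defaultdict

def split_user_sequences(user_sequences, train_frac=0.8):
    """Per-user split: the first 80% of each user's observing sequences
    go to training, the remaining 20% to testing, mirroring the
    evaluation protocol described in the paper."""
    train, test = defaultdict(list), defaultdict(list)
    for user, seqs in user_sequences.items():
        cut = int(len(seqs) * train_frac)
        train[user] = seqs[:cut]
        test[user] = seqs[cut:]
    return train, test

def make_eval_instances(test_seqs):
    """Each test sequence yields (input prefix, last item to predict):
    the model sees all but the final song and must rank/predict it."""
    return [(seq[:-1], seq[-1]) for seq in test_seqs if len(seq) >= 2]
```

Whether the 80/20 boundary is chronological or random per user is not stated in the quoted passage; the sketch simply takes the first 80% in stored order.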
Hardware Specification | No | The paper does not mention any specific hardware used for running the experiments.
Software Dependencies | No | The paper does not list any specific software or library dependencies with version numbers used for implementation.
Experiment Setup | Yes | "These personalized parameters are randomly initialized in the range [0.0, 1.0] except for the φu of non-normalized experience which is randomly drawn from absolute N(0, 0.1). By using the learning method introduced in the framework, we can further perform personalized recommendations based on Eq. 4. Given a training set containing observing sequences of agents, we can iteratively update the parameters Θ using the gradient ascent method."
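The initialization and learning scheme quoted above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the parameter name `alpha` is a placeholder for the personalized parameters, and the paper does not say whether 0.1 in N(0, 0.1) is the variance or the standard deviation (the sketch treats it as the standard deviation):

```python
import random

def init_personalized_params(users, seed=0):
    """Randomly initialize per-user parameters in [0.0, 1.0], except
    phi_u (non-normalized experience), drawn from |N(0, 0.1)| as the
    paper states. 'alpha' is a placeholder for the other parameters."""
    rng = random.Random(seed)
    return {
        u: {
            "alpha": rng.uniform(0.0, 1.0),   # generic personalized weight
            "phi": abs(rng.gauss(0.0, 0.1)),  # absolute draw from N(0, 0.1)
        }
        for u in users
    }

def gradient_ascent_step(theta, grad, lr=0.01):
    """One gradient-ascent update on the parameters Theta: move each
    parameter in the direction that increases the training objective."""
    return {k: v + lr * grad[k] for k, v in theta.items()}
```

In use, the gradients would come from differentiating the model's training objective (the paper's Eq. 4 framework) over the observed sequences, and the step would be repeated until convergence.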