Rating-Boosted Latent Topics: Understanding Users and Items with Ratings and Reviews
Authors: Yunzhi Tan, Min Zhang, Yiqun Liu, Shaoping Ma
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on 26 real-world datasets from Amazon demonstrate that our approach significantly improves the rating prediction accuracy compared with various state-of-the-art models, such as LFM, HFT, CTR and RMR models. |
| Researcher Affiliation | Academia | Yunzhi Tan, Min Zhang, Yiqun Liu, Shaoping Ma. State Key Laboratory of Intelligent Technology and Systems; Tsinghua National TNLIST Lab; Department of Computer Science, Tsinghua University, Beijing, 100084, China. cloudcompute09@gmail.com, {z-m,yiqunliu,msp}@tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the 'open source implementation in My Media Lite' for LFM and 'the source code...released by the authors' for HFT, but it does not provide access to source code for the method proposed in the paper (RBLT). |
| Open Datasets | Yes | Experimental results on 26 real-world datasets from Amazon demonstrate that our approach significantly improves the rating prediction accuracy; the Amazon review datasets used are publicly available. |
| Dataset Splits | Yes | To avoid data biases, we randomly selected 80% of each dataset for training, 10% of each dataset for validation and the remaining 10% for testing |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'My Media Lite' and 'LDA model' but does not specify version numbers for any software dependencies. |
| Experiment Setup | No | The paper mentions minimizing an objective function with stochastic gradient descent (SGD) and using grid search for regularization parameters λ1 and λ2, but it does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, specific λ values). |
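The 80/10/10 random split quoted in the Dataset Splits row can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the `seed` and record format are assumptions.

```python
import random

def split_dataset(records, seed=42):
    """Randomly split records into 80% train / 10% validation / 10% test,
    mirroring the protocol quoted from the paper. The seed is an assumption
    added here for reproducibility; the paper does not report one."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_valid = int(n * 0.1)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

train, valid, test = split_dataset(list(range(1000)))
print(len(train), len(valid), len(test))  # 800 100 100
```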
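The grid search over the regularization parameters λ1 and λ2 mentioned in the Experiment Setup row could look like the sketch below. The candidate values and the stubbed validation objective are hypothetical, since the paper reports neither the grids searched nor the chosen values.

```python
import itertools

# Hypothetical candidate grids; the paper does not report the values searched.
lambda1_grid = [0.001, 0.01, 0.1]
lambda2_grid = [0.001, 0.01, 0.1]

def validation_rmse(l1, l2):
    """Placeholder: in the paper's setup this would train the model with SGD
    under regularization (l1, l2) and return RMSE on the validation split.
    Stubbed here with a toy function for illustration."""
    return (l1 - 0.01) ** 2 + (l2 - 0.1) ** 2

# Pick the (lambda1, lambda2) pair with the lowest validation error.
best = min(itertools.product(lambda1_grid, lambda2_grid),
           key=lambda pair: validation_rmse(*pair))
print(best)  # (0.01, 0.1)
```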