Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Rating-Boosted Latent Topics: Understanding Users and Items with Ratings and Reviews

Authors: Yunzhi Tan, Min Zhang, Yiqun Liu, Shaoping Ma

IJCAI 2016 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on 26 real-world datasets from Amazon demonstrate that our approach significantly improves the rating prediction accuracy compared with various state-of-the-art models, such as LFM, HFT, CTR and RMR models.
Researcher Affiliation | Academia | Yunzhi Tan, Min Zhang, Yiqun Liu, Shaoping Ma; State Key Laboratory of Intelligent Technology and Systems; Tsinghua National TNLIST Lab; Department of Computer Science, Tsinghua University, Beijing, 100084, China; EMAIL, EMAIL
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using the 'open source implementation in My Media Lite' for LFM and 'the source code...released by the authors' for HFT, but does not provide access to the source code for the method proposed in this paper (RBLT).
Open Datasets | Yes | To avoid data biases, we randomly selected 80% of each dataset for training, 10% of each dataset for validation and the remaining 10% for testing
Dataset Splits | Yes | To avoid data biases, we randomly selected 80% of each dataset for training, 10% of each dataset for validation and the remaining 10% for testing
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions 'My Media Lite' and the 'LDA model' but does not specify version numbers for any software dependencies.
Experiment Setup | No | The paper mentions minimizing an objective function with stochastic gradient descent (SGD) and using grid search for the regularization parameters λ1 and λ2, but it does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, selected λ values).
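For reproduction attempts, the 80%/10%/10% random split quoted in the Open Datasets and Dataset Splits rows can be sketched as follows. This is a minimal illustration: the function name `split_dataset` and the fixed seed are assumptions for reproducibility, not details given in the paper.

```python
import random

def split_dataset(records, seed=42):
    """Randomly split records into 80% train, 10% validation, 10% test,
    mirroring the protocol quoted from the paper. The seed is an
    illustrative choice; the paper does not report one."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_valid = int(0.1 * n)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test
```

Because the split is random per dataset, a reproduction should repeat it independently for each of the 26 Amazon datasets rather than pooling them.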
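The Experiment Setup row notes that the paper tunes the regularization weights λ1 and λ2 by grid search over the validation set but reports neither the grid nor the selected values. A generic sketch of that selection loop is below; the grid values and the `train_model`/`evaluate` callables are placeholders a reproducer would supply (e.g., SGD training and validation RMSE), not details from the paper.

```python
from itertools import product

def grid_search(train_model, evaluate, grid=(0.001, 0.01, 0.1, 1.0)):
    """Pick the (lambda1, lambda2) pair minimizing validation error.

    train_model(lam1, lam2) -> model   # placeholder: e.g., SGD training
    evaluate(model) -> float           # placeholder: e.g., validation RMSE
    The grid values are illustrative; the paper does not state them."""
    best_err, best_pair = float("inf"), None
    for lam1, lam2 in product(grid, repeat=2):
        model = train_model(lam1, lam2)
        err = evaluate(model)
        if err < best_err:
            best_err, best_pair = err, (lam1, lam2)
    return best_pair, best_err
```

Selecting on the validation split and reporting on the held-out test split keeps the tuning consistent with the 80/10/10 protocol quoted above.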