Collaborative Multi-Level Embedding Learning from Reviews for Rating Prediction
Authors: Wei Zhang, Quan Yuan, Jiawei Han, Jianyong Wang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations on real-world datasets show that CMLE outperforms several competitive methods and addresses both of the limitations it identifies. |
| Researcher Affiliation | Academia | Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, China |
| Pseudocode | No | The paper describes the model learning process textually and via mathematical equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on several real datasets which are publicly available [McAuley and Leskovec, 2013]. |
| Dataset Splits | Yes | For later comparisons, we randomly split the three datasets into train, validation, and test sets with the ratio of 7 to 1 to 2, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using the 'Stanford log-linear POS tagger' but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | For CMLE, we initialize the learning rate to 0.2, set the regularization hyper-parameters to 0.1 (the same value used for other factor-based methods such as BMF), and set the relative weight to 0.1. All experiments are conducted with embedding dimension K = 40. (A sketch of this setup appears after the table.) |
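
The dataset-split and experiment-setup rows are concrete enough to sketch in code. The snippet below is a minimal, hypothetical Python illustration of the reported 7:1:2 random split and hyper-parameter values; the function name `split_7_1_2`, the random seed, and the `config` keys are assumptions for illustration, since the paper released no code.

```python
import numpy as np

def split_7_1_2(ratings, seed=0):
    """Randomly split (user, item, rating) triples 70/10/20 percent,
    mirroring the train/validation/test ratio reported in the paper.
    The function name and seed are assumptions, not from the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))
    n_train = int(0.7 * len(ratings))
    n_valid = int(0.1 * len(ratings))
    train = [ratings[i] for i in idx[:n_train]]
    valid = [ratings[i] for i in idx[n_train:n_train + n_valid]]
    test = [ratings[i] for i in idx[n_train + n_valid:]]
    return train, valid, test

# Hyper-parameter values as reported for CMLE; key names are assumed.
config = {
    "learning_rate": 0.2,    # initial learning rate
    "regularization": 0.1,   # same value as other factor-based methods (e.g., BMF)
    "relative_weight": 0.1,  # weight balancing the model's objectives (assumed meaning)
    "embedding_dim": 40,     # K = 40 for all experiments
}
```

This is a sketch of the protocol as described, not the authors' implementation; in particular, how the relative weight enters the objective is specified in the paper's equations, not here.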