A^3NCF: An Adaptive Aspect Attention Model for Rating Prediction
Authors: Zhiyong Cheng, Ying Ding, Xiangnan He, Lei Zhu, Xuemeng Song, Mohan Kankanhalli
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on several large-scale datasets, we demonstrate that our model outperforms the state-of-the-art review-aware recommender systems in the rating prediction task. |
| Researcher Affiliation | Collaboration | (1) School of Computing, National University of Singapore, Singapore; (2) Vipshop US Inc., San Jose, CA, USA; (3) School of Information Science and Engineering, Shandong Normal University, China; (4) School of Computer Science and Technology, Shandong University, China |
| Pseudocode | Yes | Algorithm 1: Generation Process of Our Topic Model. |
| Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conducted experiments on two publicly accessible datasets: the Amazon Product Review dataset [McAuley and Leskovec, 2013] and the Yelp Dataset 2017. The first dataset contains reviews and metadata of diverse products from Amazon, spanning 24 product categories. We adopted five categories and took the 5-core version for experiments, where each user or item has at least 5 interactions. The five categories are of different sizes and sparsity degrees (as shown in Table 1). The Yelp dataset contains reviews of local businesses in 12 metropolitan areas across 4 countries. (A sketch of what 5-core filtering does appears below the table.) |
| Dataset Splits | Yes | We randomly split each dataset into training, validation, and testing sets with ratio 8:1:1 for each user, as in [McAuley and Leskovec, 2013; Ling et al., 2014; Catherine and Cohen, 2017]. (A sketch of such a per-user split appears below the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions optimization methods like the 'Adam optimization method' and activation functions like 'ReLU', but it does not specify any software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions) needed for reproducibility. |
| Experiment Setup | Yes | The number of MLP layers L in the rating prediction part of A^3NCF is set to 2 in our implementation. Besides, the dropout technique [Srivastava et al., 2014] is used to prevent overfitting, with a dropout ratio of 0.5. The learning rate is set to 0.001 for all the datasets and the Adam optimization method [Kingma and Ba, 2014] is used. (These hyperparameters are wired into the training sketch below the table.) |
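
The Amazon categories are used in their released 5-core form, so no filtering code is needed to reproduce that step; the sketch below only illustrates what "5-core" means. It is a minimal, assumed implementation over a pandas DataFrame with hypothetical `user_id`/`item_id` columns, not the authors' preprocessing.

```python
# Minimal sketch of k-core filtering: iteratively drop rows until every
# user and every item has at least k interactions. Column names are
# hypothetical; the authors used Amazon's pre-built 5-core releases.
import pandas as pd

def k_core(df: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    while True:
        user_counts = df["user_id"].value_counts()
        item_counts = df["item_id"].value_counts()
        keep = df["user_id"].map(user_counts).ge(k) & df["item_id"].map(item_counts).ge(k)
        if keep.all():
            return df
        df = df[keep]
```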
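
Since no official code is released, the per-user 8:1:1 split can only be approximated. A plausible reading, shuffling each user's interactions and cutting them 80/10/10, is sketched below; the `user_id` column name and the fixed seed are assumptions.

```python
# Hedged sketch of a per-user 8:1:1 train/validation/test split.
# The paper's exact shuffling and rounding behavior is unknown.
import numpy as np
import pandas as pd

def split_per_user(df: pd.DataFrame, seed: int = 42):
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for _, group in df.groupby("user_id"):
        order = rng.permutation(len(group))
        n_train = int(0.8 * len(group))
        n_val = int(0.1 * len(group))
        train.append(group.iloc[order[:n_train]])
        val.append(group.iloc[order[n_train:n_train + n_val]])
        test.append(group.iloc[order[n_train + n_val:]])
    return pd.concat(train), pd.concat(val), pd.concat(test)
```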
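
The stated hyperparameters (L = 2 MLP layers, ReLU, dropout 0.5, Adam at learning rate 0.001) can be wired together as in the PyTorch sketch below. Layer widths, the input dimension, and the choice of PyTorch itself are assumptions; the aspect-attention fusion that feeds this MLP in A^3NCF is omitted.

```python
# Sketch of the reported rating-prediction head and optimizer settings.
# Widths (64 -> 32 -> 16 -> 1) are illustrative, not from the paper.
import torch
import torch.nn as nn

class RatingMLP(nn.Module):
    def __init__(self, in_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Dropout(0.5),  # MLP layer 1
            nn.Linear(32, 16), nn.ReLU(), nn.Dropout(0.5),      # MLP layer 2
            nn.Linear(16, 1),                                   # predicted rating
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RatingMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001 as reported
```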