BPAM: Recommendation Based on BP Neural Network with Attention Mechanism
Authors: Wu-Dong Xi, Ling Huang, Chang-Dong Wang, Yin-Yu Zheng, Jianhuang Lai
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on eight benchmark datasets have been conducted to evaluate the effectiveness of the proposed model. |
| Researcher Affiliation | Academia | Wu-Dong Xi (1,2), Ling Huang (1,2), Chang-Dong Wang (1,2), Yin-Yu Zheng (1) and Jianhuang Lai (1). (1) School of Data and Computer Science, Sun Yat-sen University, Guangzhou, 510006, China; (2) Guangdong Province Key Laboratory of Computational Science, Guangzhou, 510275, China. Emails: m13719336821@163.com, huanglinghl@hotmail.com, changdongwang@hotmail.com, zhengyy.sysu@foxmail.com, stsljh@mail.sysu.edu.cn |
| Pseudocode | Yes | Algorithm 1 The algorithm framework of BPAM (an illustrative sketch of this skeleton follows the table). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | The experiments are conducted on eight real-world publicly available datasets: MovieLens (ml-latest (ml-la), ml-1m, ml-10m) [1], filmtrust [2], jester (jester-data-1 (jd-1), jester-data-2 (jd-2), jester-data-3 (jd-3)) [3] and MovieTweetings (MT) [4]. [1] https://grouplens.org/datasets/movielens/ [2] https://www.librec.net/datasets.html [3] http://eigentaste.berkeley.edu/dataset/ [4] https://github.com/sidooms/MovieTweetings |
| Dataset Splits | No | The paper states: 'We randomly split each dataset into the training set and testing set with ratio 3:1 for each user.' It does not explicitly mention a separate validation set split. (A minimal sketch of such a per-user split follows this table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., library names like PyTorch 1.9, TensorFlow 2.x, or specific solver versions). |
| Experiment Setup | Yes | α is the trade-off parameter used to tune the importance of the global weight. ... The proposed model generates the best performance with k = 5 on most of the datasets except ml-1m; on ml-1m, the best performance is achieved with k = 10. ... Additionally, the optimal attention ratio α is around 2 to 4. ... where η ∈ (0, 1) is the learning rate. (These hyperparameters appear in the illustrative sketch after this table.) |
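
The Dataset Splits row quotes a random per-user 3:1 train/test split. Below is a minimal sketch of one way to realise that protocol; it assumes ratings arrive as `(user, item, rating)` triples, and the shuffling and rounding details are our own choices, since the paper does not specify them.

```python
import random
from collections import defaultdict

def per_user_split(ratings, train_ratio=0.75, seed=0):
    """Split each user's ratings into train/test with ratio 3:1.

    `ratings` is assumed to be an iterable of (user, item, rating)
    triples; the exact shuffling and rounding scheme is not stated
    in the paper, so this is one plausible realisation.
    """
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for u, i, r in ratings:
        by_user[u].append((u, i, r))
    train, test = [], []
    for rows in by_user.values():
        rng.shuffle(rows)
        cut = max(1, int(round(len(rows) * train_ratio)))  # keep at least one training rating per user
        train.extend(rows[:cut])
        test.extend(rows[cut:])
    return train, test
```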
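The Pseudocode and Experiment Setup rows describe Algorithm 1 only at a high level: each target user's ratings are predicted from the ratings of its k nearest users through attention weights that mix a per-user (local) component with a globally shared one via the trade-off α, trained by gradient descent with learning rate η ∈ (0, 1). The sketch below is our own minimal reading of that skeleton, not the authors' implementation: the cosine similarity for neighbour selection, the linear prediction, the per-user normalisation of the global update, and all names (`train_bpam_sketch`, `w_local`, `w_global`) are assumptions. The defaults k = 5 and α = 3 follow the reported optima quoted above.

```python
import numpy as np

def train_bpam_sketch(R, k=5, alpha=3.0, eta=0.01, epochs=50):
    """Illustrative BPAM-style training loop (hypothetical simplification).

    R: dense user-item rating matrix with 0 for missing entries;
    assumes k < number of users. Each user's prediction combines its
    k nearest neighbours' ratings through an attention weight that
    mixes a per-user local component with a globally shared one via
    the trade-off alpha, updated by gradient descent with rate eta.
    """
    n_users, n_items = R.shape
    # cosine similarity between users to pick k nearest neighbours
    norms = np.linalg.norm(R, axis=1, keepdims=True) + 1e-12
    sim = (R / norms) @ (R / norms).T
    np.fill_diagonal(sim, -np.inf)                      # exclude self
    neighbours = np.argsort(-sim, axis=1)[:, :k]        # (n_users, k)

    w_local = np.zeros((n_users, k))                    # per-user weights
    w_global = np.zeros(k)                              # shared weights

    for _ in range(epochs):
        for u in range(n_users):
            X = R[neighbours[u]]                        # (k, n_items)
            w = w_local[u] + alpha * w_global           # attention mix
            mask = R[u] != 0                            # observed items only
            pred = w @ X                                # (n_items,)
            err = (pred - R[u]) * mask
            grad = X @ err / max(mask.sum(), 1)         # dL/dw, (k,)
            w_local[u] -= eta * grad                    # local update
            # chain rule gives alpha * grad for the shared weights;
            # dividing by n_users is a scaling heuristic of this sketch
            w_global -= eta * alpha * grad / n_users
    return w_local, w_global, neighbours
```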