Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Authors: Dazhong Rong, Qinming He, Jianhai Chen
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets demonstrate that our attacks are effective and outperform all baseline attacks. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Zhejiang University |
| Pseudocode | No | The paper describes attack flows with numbered steps and equations, but it does not present them within a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | https://github.com/rdz98/PoisonFedDLRS |
| Open Datasets | Yes | We experiment with two popular and publicly accessible datasets: MovieLens (ML) and Amazon Digital Music (AZ). |
| Dataset Splits | Yes | In both datasets, we convert the user-item interactions (i.e., ratings and reviews) into implicit data following [He et al., 2017], and divide each user's interactions into a training set and a test set in the ratio of 4:1. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory specifications, or types of computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions using NCF as the base recommender model but does not specify version numbers for any software dependencies, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | The dimensions of both hidden layers are set to 8. The dimensions of users' embedding vectors and items' embedding vectors are also set to 8. The learning rate η for both benign users and malicious users is set to 0.001. The base recommender model is federated trained for 30 epochs on both ML and AZ to ensure convergence for recommendation. Moreover, we set r, T, n, σ, ξ and β to 4, 1, 10, 0.01, 0.001 and 30, respectively. (A configuration sketch follows the table.) |
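A minimal sketch of the per-user 4:1 train/test split quoted in the Dataset Splits row. The paper does not say whether the split is random or chronological, so the random shuffle, the `seed` argument, and the helper name `split_interactions` are illustrative assumptions; binarizing ratings/reviews into implicit feedback (following He et al., 2017) is assumed to have happened upstream.

```python
import random
from collections import defaultdict

def split_interactions(interactions, train_ratio=0.8, seed=42):
    """Split each user's implicit interactions into train/test sets (4:1).

    `interactions` is an iterable of (user_id, item_id) pairs that have
    already been binarized into implicit feedback.
    """
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user, item in interactions:
        by_user[user].append(item)

    train, test = {}, {}
    for user, items in by_user.items():
        rng.shuffle(items)               # assumption: random (not temporal) split
        cut = int(len(items) * train_ratio)  # 4:1 split point per user
        train[user], test[user] = items[:cut], items[cut:]
    return train, test
```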
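And a minimal configuration sketch of the stated experiment setup. Only the dimensions (8), the learning rate (0.001), the epoch count (30), and the constants r, T, n, σ, ξ, β come from the paper; the exact NCF layer wiring, the choice of SGD as optimizer, and the placeholder user/item counts are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Values stated in the paper's experiment setup.
EMBED_DIM = 8    # user/item embedding dimension
HIDDEN_DIM = 8   # dimension of both hidden layers
LR = 0.001       # learning rate for benign and malicious users
EPOCHS = 30      # federated training epochs on ML and AZ
# Attack-related constants; their roles are defined in the paper and the
# values are reproduced here as-is.
R, T, N, SIGMA, XI, BETA = 4, 1, 10, 0.01, 0.001, 30

class NCF(nn.Module):
    """Sketch of an MLP-style NCF base model matching the stated dimensions."""
    def __init__(self, num_users, num_items):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, EMBED_DIM)
        self.item_emb = nn.Embedding(num_items, EMBED_DIM)
        self.mlp = nn.Sequential(          # two hidden layers of dimension 8
            nn.Linear(2 * EMBED_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, 1),
        )

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # interaction probability

model = NCF(num_users=1000, num_items=2000)  # placeholder corpus sizes
optimizer = torch.optim.SGD(model.parameters(), lr=LR)  # optimizer choice assumed
```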