Practical Cross-System Shilling Attacks with Limited Access to Data
Authors: Meifang Zeng, Ke Li, Bingchuan Jiang, Liujuan Cao, Hui Li
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have demonstrated the superiority of PC-Attack over state-of-the-art baselines. Our implementation of PC-Attack is available at https://github.com/KDEGroup/PC-Attack. We conduct extensive experiments to demonstrate that PC-Attack exceeds state-of-the-art methods w.r.t. attack power and attack invisibility. |
| Researcher Affiliation | Academia | Meifang Zeng¹, Ke Li², Bingchuan Jiang², Liujuan Cao¹, Hui Li¹* — ¹School of Informatics, Xiamen University; ²PLA Strategic Support Force Information Engineering University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Our implementation of PC-Attack is available at https://github.com/KDEGroup/PC-Attack. |
| Open Datasets | Yes | We use four public datasets widely adopted in previous works on shilling attacks (Lin et al. 2020, 2022): FilmTrust, Yelp, and two Amazon datasets, Automotive and Tools & Home Improvement (T&HI). Tab. 2 illustrates the statistics of the data. |
| Dataset Splits | No | The default training/test split is used for training and tuning surrogate RS models (where baselines require one) and victim RS models. However, the paper does not specify exact split percentages or sample counts for training, validation, and test sets, nor does it cite predefined splits with enough detail for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper states that the "Adam optimizer is adopted for optimization" but does not name the deep learning framework (e.g., PyTorch or TensorFlow) or provide version numbers for Python or other software libraries used. |
| Experiment Setup | Yes | For PC-Attack, we set training epochs to 32, batch size to 32, embedding size to 64 and learning rate to 0.005. z and y used in crafting profiles are set to 50 and 10, respectively. The length of random walk is set to 64 and the restart probability 1 − α is 0.8. The number of GIN layers b̂ is 5. Other hyper-parameters of PC-Attack are selected through grid search and the chosen hyper-parameters are: τ = 0.07, λ_g = 0.5, λ_s = 0.5, η_g = 0.5, η_s = 0.5, µ_user = 0.5, and µ_item = 0.5. By default, we set p = 10% when collecting target data. Adam optimizer is adopted for optimization. |
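
The hyper-parameters reported in the Experiment Setup row can be collected into a single configuration. The sketch below is illustrative only: the variable names are ours, and the use of PyTorch is an assumption, since the paper only states that the Adam optimizer is used; the authors' actual implementation is in the linked GitHub repository.

```python
import torch
import torch.nn as nn

# Hyper-parameters as reported in the paper's experiment setup.
# All key names are illustrative; they are not taken from the authors' code.
config = {
    "epochs": 32,
    "batch_size": 32,
    "embedding_size": 64,
    "learning_rate": 0.005,
    "z": 50,                # used in crafting fake profiles
    "y": 10,                # used in crafting fake profiles
    "walk_length": 64,      # random-walk length
    "restart_prob": 0.8,    # restart probability 1 - alpha
    "gin_layers": 5,        # number of GIN layers (b-hat)
    "tau": 0.07,            # selected via grid search, as are the values below
    "lambda_g": 0.5,
    "lambda_s": 0.5,
    "eta_g": 0.5,
    "eta_s": 0.5,
    "mu_user": 0.5,
    "mu_item": 0.5,
    "p": 0.10,              # fraction of target data collected by default
}

# Placeholder module standing in for the PC-Attack model; the real model
# lives in the authors' repository (https://github.com/KDEGroup/PC-Attack).
model = nn.Embedding(num_embeddings=1000, embedding_dim=config["embedding_size"])

# The paper states Adam is used for optimization; PyTorch is assumed here.
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
```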