PPGenCDR: A Stable and Robust Framework for Privacy-Preserving Cross-Domain Recommendation

Authors: Xinting Liao, Weiming Liu, Xiaolin Zheng, Binhui Yao, Chaochao Chen

AAAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental The extensive empirical studies on Douban and Amazon datasets demonstrate that PPGenCDR significantly outperforms the state-of-the-art recommendation models while preserving privacy. In this section, we aim to answer the following questions through empirical studies: Q1: Can PPGenCDR outperform existing single-domain recommendation models, the state-of-the-art (SOTA) CDR models in plaintext, and the SOTA PPCDR models? Q2: How can GS and RC contribute to the performance of PPGenCDR? Q3: How can SPP preserve privacy in PPGenCDR in a cost-effective way? Q4: How do hyper-parameters impact PPGenCDR?
Researcher Affiliation Collaboration Xinting Liao (1), Weiming Liu (1), Xiaolin Zheng (1), Binhui Yao (2), Chaochao Chen (1)* — (1) College of Computer Science and Technology, Zhejiang University, China; (2) Midea Group, Foshan, China
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code No The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository.
Open Datasets Yes Datasets. We use two datasets, i.e., Amazon (Ni, Li, and McAuley 2019) and Douban (Zhu et al. 2021b).
Dataset Splits No The paper mentions "train", "validation", and "test" in the context of model components and evaluations, but it does not explicitly describe how the datasets are split into training, validation, and test sets (e.g., specific percentages, sample counts, or references to predefined splits).
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies No The paper mentions optimizers like "RMSprop" and "Adam", but it does not specify software dependencies with version numbers (e.g., Python version, specific deep learning frameworks like PyTorch or TensorFlow, or other libraries).
Experiment Setup Yes We set batch size N as 128, learning rate η = 0.01 for Douban, and η = 0.0005 for Amazon. We set clipping constant B = 1, and the dimension of latent features K = 200. We compare the performance by varying the hyper-parameter of alignment λ_A ∈ {0.25, 0.5, 1, 5, 10, 20} in (a), the parameter of GS τ ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10} in (b), and the hyper-parameter of robustness λ_R ∈ {0.1, 0.25, 0.5, 1, 10, 20, 30} in (c), respectively.
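To make the reported setup concrete, the settings above can be collected into a small configuration helper. This is only an illustrative sketch: the paper releases no code, so the function and key names here are assumptions, while the values themselves are the ones quoted from the paper.

```python
# Illustrative reconstruction of the reported PPGenCDR experiment settings.
# Function and key names are hypothetical; the numeric values come from the paper.

def make_config(dataset: str) -> dict:
    """Return the reported hyperparameters for a given dataset."""
    if dataset not in ("Douban", "Amazon"):
        raise ValueError(f"Unknown dataset: {dataset}")
    return {
        "batch_size": 128,                                          # N
        "learning_rate": 0.01 if dataset == "Douban" else 0.0005,   # eta
        "clipping_constant": 1.0,                                   # B
        "latent_dim": 200,                                          # K
        # Grids swept in the sensitivity study (panels a-c):
        "lambda_A_grid": [0.25, 0.5, 1, 5, 10, 20],       # alignment weight
        "tau_grid": [0.1, 0.25, 0.5, 1, 2, 5, 10],        # GS parameter
        "lambda_R_grid": [0.1, 0.25, 0.5, 1, 10, 20, 30], # robustness weight
    }

print(make_config("Amazon")["learning_rate"])  # 0.0005
```

Note that only the learning rate differs between the two datasets; all other reported settings are shared.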