Enhancing Dual-Target Cross-Domain Recommendation with Federated Privacy-Preserving Learning

Authors: Zhenghong Lin, Wei Huang, Hengyu Zhang, Jiayu Xu, Weiming Liu, Xinting Liao, Fan Wang, Shiping Wang, Yanchao Tan

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on real-world datasets validate that our proposed P2DTR framework achieves superior utility under a privacy-preserving guarantee on both domains. Finally, we evaluate the proposed P2DTR with extensive experiments on four real-world benchmark datasets for federated DTCDR. Extensive experimental results show that the proposed P2DTR is able to significantly improve the recommendation performance under privacy-preserving scenarios over all baselines."
Researcher Affiliation | Academia | "1) College of Computer and Data Science, Fuzhou University, Fuzhou, China; 2) College of Computer Science and Technology, Zhejiang University, Hangzhou, China"
Pseudocode | No | The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "On the basis of the previous works, we build our scenarios using the chosen cross-domain recommendation datasets [Zhu et al., 2022], and the preprocessing settings with two domains (K = 2). In particular, we carry out experiments on the large-scale public Amazon datasets. Note that, we also conduct experiments under the multi-domain scenarios on Douban datasets with (K = 3), which follows [Liu et al., 2023c]."
Dataset Splits | Yes | "For each user, we use the first 40% of data as the training set, 30% data as the validation set, and 30% data as the testing set." (A minimal sketch of this per-user chronological split is given after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | "For the common hyperparameters in the baselines, we adopt the same value for all the methods, such as the embedding dimension d to 128, the batch size to 1024 and the minibatch size to 128 or 256. For our proposed model P2DTR, we tune the hyperparameter λ in {10^2, 10, 1, 10^-1, 10^-2}, the number of prototypes in {16, 32, 64, 128} and the number of graph encoder layers in {1, 2, 3, 4}. In our model, we use the Adam optimizer and a decayed learning rate." (An illustrative configuration sketch of this setup also follows the table.)
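
The per-user 40%/30%/30% chronological split quoted above can be reproduced directly from raw interaction logs. Below is a minimal Python sketch, assuming interactions are available as (user, item, timestamp) tuples; the function name and data layout are illustrative assumptions, since the paper releases no code.

    from collections import defaultdict

    def chronological_split(interactions, train_frac=0.4, valid_frac=0.3):
        # Split each user's interactions chronologically into train/valid/test,
        # following the reported 40%/30%/30% proportions.
        per_user = defaultdict(list)
        for user, item, ts in interactions:
            per_user[user].append((ts, item))

        train, valid, test = [], [], []
        for user, events in per_user.items():
            events.sort()  # earliest interactions first
            n_train = int(len(events) * train_frac)
            n_valid = int(len(events) * valid_frac)
            for split, chunk in ((train, events[:n_train]),
                                 (valid, events[n_train:n_train + n_valid]),
                                 (test, events[n_train + n_valid:])):
                split.extend((user, item, ts) for ts, item in chunk)
        return train, valid, test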
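
Likewise, the reported experiment setup can be summarized as a small configuration sketch. The fixed values (embedding dimension 128, batch size 1024) and the search grids for λ, the number of prototypes, and the number of graph-encoder layers come from the quote above; the initial learning rate, the decay schedule, and all function names are assumptions, since the paper does not report them.

    import itertools
    import torch

    # Fixed settings shared across all compared methods (as reported).
    EMBED_DIM = 128
    BATCH_SIZE = 1024

    # Reported search space for the P2DTR-specific hyperparameters.
    LAMBDAS = [1e2, 10, 1, 1e-1, 1e-2]
    NUM_PROTOTYPES = [16, 32, 64, 128]
    NUM_GNN_LAYERS = [1, 2, 3, 4]

    def hyperparameter_grid():
        # Enumerate every combination to be evaluated on the validation set.
        for lam, n_proto, n_layers in itertools.product(
                LAMBDAS, NUM_PROTOTYPES, NUM_GNN_LAYERS):
            yield {"lambda": lam, "num_prototypes": n_proto,
                   "num_gnn_layers": n_layers,
                   "embed_dim": EMBED_DIM, "batch_size": BATCH_SIZE}

    def make_optimizer(model, lr=1e-3, decay_step=10, decay_rate=0.9):
        # Adam with a stepwise learning-rate decay; the initial rate and
        # schedule here are illustrative assumptions, not reported values.
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        scheduler = torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=decay_step, gamma=decay_rate)
        return optimizer, scheduler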