Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation

Authors: Jamie Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, Li Wang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on two benchmark datasets demonstrate that S3Rec improves the computation time and communication size of the state-of-the-art model by about 40× and 423× on average, respectively."
Researcher Affiliation | Collaboration | Jamie Cui (1), Chaochao Chen (2,1)*, Lingjuan Lyu (3), Carl Yang (4), and Li Wang (1); (1) Ant Group, (2) Zhejiang University, (3) Sony AI, (4) Emory University
Pseudocode | Yes | "Figure 1: Secure matrix multiplication protocol, where Shr is a secret sharing algorithm. ... Figure 3: Our proposed S3Rec framework, where Matrix Mul stands for the secure matrix multiplication protocol, Add stands for the secure addition protocol, and Rec stands for the reconstruction protocol for secret sharing. ... Figure 5: Dense-sparse Matrix Mul(X, Y) with insensitive and sensitive sparsity protocols, where X ∈ R^{k×m} and Y ∈ R^{m×m}." (a minimal sketch of the Shr/Add/Rec primitives follows the table)
Open Source Code | No | The paper provides neither a link to its source code nor an explicit statement about releasing it.
Open Datasets | Yes | "Dataset. We choose two popular benchmark datasets to evaluate the performance of our proposed model, i.e., Epinions [19] and LibraryThing (Lthing) [32], both of which are widely used for evaluating social recommendation tasks."
Dataset Splits | Yes | "We use five-fold cross-validation during experiments." (see the fold-assignment sketch after the table)
Hardware Specification | Yes | "We run our experiments on a machine with a 4-core 2.4 GHz Intel Core i5 and 16 GB of memory, and we compile our program with a modern C++ compiler (with support for the C++17 standard)."
Software Dependencies | Yes | "For the additive HE scheme, we choose the implementation of libpaillier. Also, we use SealPIR with the same parameter setting as the original paper [1]. For security, we choose 128-bit computational security and 40-bit statistical security, as recommended by NIST [2]. Similarly, we leverage the generic ABY library to implement SeSoRec [5] and MPC building blocks such as addition, multiplication, and truncation. In particular, we choose 64-bit secret sharing in all our experiments." (see the fixed-point truncation sketch after the table)
Experiment Setup | Yes | "Hyper-parameters. For all the models, we set K = 10 during comparison. We tune the learning rate θ and the regularization parameter λ in {10^{-3}, 10^{-2}, ..., 10^{1}} to achieve their best values. We also report the effect of K on model performance." (see the grid-search sketch after the table)
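
To make the Shr, Add, and Rec primitives named in the figure captions concrete, here is a minimal C++17 sketch (not the authors' code) of additive secret sharing over the ring Z_{2^64}, matching the paper's stated 64-bit secret sharing; unsigned 64-bit overflow implements the modular arithmetic for free. Note that multiplying two shared matrices needs extra machinery (e.g., Beaver triples or HE), which is what the paper's secure matrix multiplication protocol supplies; only sharing, local addition, and reconstruction are shown here.

```cpp
// Minimal sketch of additive secret sharing over Z_{2^64} (illustrative only).
#include <cstdint>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<uint64_t>>;

// Shr: split a secret matrix X into two shares with X0 + X1 = X (mod 2^64).
std::pair<Mat, Mat> Shr(const Mat& x, std::mt19937_64& rng) {
    Mat s0 = x, s1 = x;
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t j = 0; j < x[i].size(); ++j) {
            s0[i][j] = rng();                // uniformly random mask
            s1[i][j] = x[i][j] - s0[i][j];   // unsigned wrap = mod 2^64
        }
    return {s0, s1};
}

// Add: each party adds its local shares; no communication is needed.
Mat Add(const Mat& a, const Mat& b) {
    Mat c = a;
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < a[i].size(); ++j) c[i][j] = a[i][j] + b[i][j];
    return c;
}

// Rec: the parties exchange shares and sum them to open the secret.
Mat Rec(const Mat& s0, const Mat& s1) { return Add(s0, s1); }

int main() {
    std::mt19937_64 rng(42);
    Mat x = {{1, 2}, {3, 4}}, y = {{10, 20}, {30, 40}};
    auto [x0, x1] = Shr(x, rng);
    auto [y0, y1] = Shr(y, rng);
    // Secure Add: party 0 computes x0 + y0, party 1 computes x1 + y1.
    Mat z = Rec(Add(x0, y0), Add(x1, y1));
    std::cout << z[0][0] << " " << z[1][1] << "\n";  // prints 11 44
}
```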
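
The five-fold protocol quoted under Dataset Splits can be pictured with a tiny fold-assignment sketch; the fold count, seed, and rating count below are illustrative, not values from the paper.

```cpp
// Illustrative five-fold assignment: fold[i] == f means rating i is held out
// for testing in round f; the other four folds form the training set.
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

std::vector<int> fold_assignment(size_t n_ratings, int k) {
    std::vector<int> fold(n_ratings);
    for (size_t i = 0; i < n_ratings; ++i) fold[i] = (int)(i % k);
    std::shuffle(fold.begin(), fold.end(), std::mt19937{7});  // fixed seed
    return fold;
}

int main() {
    auto fold = fold_assignment(100003, 5);  // hypothetical rating count
    std::vector<int> count(5, 0);
    for (int f : fold) ++count[f];
    for (int f = 0; f < 5; ++f)
        std::cout << "fold " << f << ": " << count[f] << " test ratings\n";
}
```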
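
The truncation building block listed under Software Dependencies exists because real-valued model parameters are embedded in Z_{2^64} as fixed-point integers: each multiplication doubles the number of fractional bits, and truncation shifts them back. A minimal local sketch, assuming 16 fractional bits (the paper does not state its precision):

```cpp
// Illustrative fixed-point encoding with post-multiplication truncation.
#include <cstdint>
#include <iostream>

constexpr unsigned F = 16;  // fractional bits (assumed for illustration)

uint64_t encode(double v) { return (uint64_t)(int64_t)(v * (1ULL << F)); }
double decode(uint64_t v) { return (double)(int64_t)v / (1ULL << F); }

// The product of two fixed-point values carries 2F fractional bits, so it is
// truncated by F bits. In a real protocol this step runs on secret shares
// (e.g., with ABY-style building blocks) and incurs a small rounding error.
uint64_t mul_trunc(uint64_t a, uint64_t b) {
    __int128 prod = (__int128)(int64_t)a * (int64_t)b;
    return (uint64_t)(prod >> F);  // arithmetic shift restores F fraction bits
}

int main() {
    uint64_t a = encode(1.5), b = encode(-2.25);
    std::cout << decode(mul_trunc(a, b)) << "\n";  // prints -3.375
}
```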
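
The hyper-parameter sweep quoted under Experiment Setup is a plain grid over powers of ten. A minimal sketch of that search; train_and_score is a hypothetical placeholder for one training run, not a function from the paper:

```cpp
// Illustrative grid search over {10^-3, ..., 10^1} for learning rate and
// regularization parameter, keeping the pair with the best validation score.
#include <cmath>
#include <iostream>

// Placeholder: would train the model with (lr, lambda) and return, e.g., RMSE.
double train_and_score(double lr, double lambda) {
    return std::abs(std::log10(lr) + 2) + std::abs(std::log10(lambda) + 1);
}

int main() {
    double best_lr = 0, best_lambda = 0, best = 1e9;
    for (int a = -3; a <= 1; ++a)        // learning rate exponent
        for (int b = -3; b <= 1; ++b) {  // regularizer exponent
            double lr = std::pow(10.0, a), lambda = std::pow(10.0, b);
            double score = train_and_score(lr, lambda);
            if (score < best) { best = score; best_lr = lr; best_lambda = lambda; }
        }
    std::cout << "best lr=" << best_lr << " lambda=" << best_lambda << "\n";
}
```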