Differentially Private Matrix Completion Revisited

Authors: Prateek Jain, Om Dipakbhai Thakkar, Abhradeep Thakurta

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now present empirical results for Private FW (Algorithm 1) on several benchmark datasets, and compare its performance to state-of-the-art methods like (McSherry & Mironov, 2009), and private as well as non-private variants of the Projected Gradient Descent (PGD) method (Cai et al., 2010c; Bassily et al., 2014a; Abadi et al., 2016). In all our experiments, we see that Private FW provides accuracy very close to that of the non-private baseline, and almost always significantly outperforms both private baselines.
Researcher Affiliation | Collaboration | Prateek Jain (Microsoft Research, prajain@microsoft.com), Om Thakkar (Department of Computer Science, Boston University, omthkkr@bu.edu), Abhradeep Thakurta (Computer Science Department, University of California Santa Cruz, aguhatha@ucsc.edu).
Pseudocode | Yes | Algorithm 1: Private Frank-Wolfe algorithm.
Open Source Code | No | The paper states "The full version of this work is available at https://arxiv.org/abs/1712.09765" but provides no explicit statement or link releasing its source code.
Open Datasets | Yes | Empirical results: Finally, we show that along with providing strong analytical guarantees, our Private FW also performs well empirically. In particular, we show its efficacy on benchmark collaborative filtering datasets like Jester (Goldberg et al., 2001), MovieLens (Harper & Konstan, 2015), the Netflix prize dataset (Bennett et al., 2007), and the Yahoo! Music recommender dataset (Yahoo, 2011).
Dataset Splits | No | The paper states "For all datasets, we randomly sample 1% of the given ratings for measuring the test error," but does not explicitly mention a separate validation split or the percentages for training, validation, and test sets.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or library versions).
Experiment Setup | Yes | We vary the privacy parameter ε ∈ [0.1, 5], but keep δ = 10⁻⁶, thus ensuring that δ < 1/m for all datasets. Moreover, we report results averaged over 10 independent runs.
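The private Frank-Wolfe procedure referenced above (Algorithm 1 in the paper) can be sketched roughly as follows. This is an illustrative variant only, not the paper's exact algorithm: the squared loss on observed entries, the Gaussian noise scale, the step-size schedule, and all names are assumptions made for the sketch.

```python
# Illustrative sketch of a noisy Frank-Wolfe update for matrix completion:
# at each iteration, Gaussian noise is added to the gradient before extracting
# its top singular pair, which serves as the (private) linear minimizer over
# the nuclear-norm ball. Noise calibration here is heuristic, not the paper's.
import numpy as np

def private_fw(ratings, mask, nuc_radius=1.0, T=20, eps=1.0, delta=1e-6, rng=None):
    """ratings: m x n matrix; mask: 1 on observed entries, 0 elsewhere."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = ratings.shape
    W = np.zeros((m, n))
    # Heuristic per-iteration Gaussian noise scale (assumption, for illustration).
    sigma = np.sqrt(T * np.log(1.0 / delta)) / eps
    for t in range(T):
        # Gradient of 0.5 * ||mask * (W - ratings)||_F^2.
        grad = mask * (W - ratings)
        noisy_grad = grad + rng.normal(scale=sigma, size=grad.shape)
        # Top singular pair of the noisy gradient gives the FW atom.
        U, s, Vt = np.linalg.svd(noisy_grad, full_matrices=False)
        atom = -nuc_radius * np.outer(U[:, 0], Vt[0, :])
        eta = 2.0 / (t + 2)  # standard Frank-Wolfe step size
        W = (1 - eta) * W + eta * atom
    return W
```

Per the setup row above, such a routine would be run for each ε in [0.1, 5] with δ = 10⁻⁶ fixed, and the resulting test errors averaged over 10 independent runs.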
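The evaluation split described in the Dataset Splits row (holding out a random 1% of observed ratings as the test set) can be sketched as below. The coordinate-triple format and the helper name are assumptions, not from the paper.

```python
# Hedged sketch: hold out a random fraction of observed ratings for test error.
import numpy as np

def split_observed(ratings_coords, test_frac=0.01, seed=0):
    """ratings_coords: array of (row, col, value) triples for observed entries."""
    rng = np.random.default_rng(seed)
    n = len(ratings_coords)
    # Sample test indices without replacement; everything else is training data.
    test_idx = rng.choice(n, size=max(1, int(test_frac * n)), replace=False)
    test_mask = np.zeros(n, dtype=bool)
    test_mask[test_idx] = True
    return ratings_coords[~test_mask], ratings_coords[test_mask]
```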