Differentially Private Covariance Revisited

Authors: Wei Dong, Yuting Liang, Ke Yi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that they offer significant improvements over prior work."
Researcher Affiliation | Academia | Wei Dong, Yuting Liang, Ke Yi ({wdongac, yliangbs, yike}@cse.ust.hk), Department of Computer Science, Hong Kong University of Science and Technology
Pseudocode | Yes | Algorithm 1: SeparateCov
Open Source Code | Yes | The code can be found at https://github.com/hkustDB/PrivateCovariance.
Open Datasets | Yes | "The first dataset is the MNIST [27] dataset, which contains images of handwritten digits. We use its training dataset, which contains 60,000 images represented as vectors in {0, ..., 255}^d, where d = 784 = 28 × 28. These vectors are normalized by 255√d in the experiments." ... [27] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten digits, 1998. Available online at: http://yann.lecun.com/exdb/mnist/. Last accessed: May 2022.
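The normalization quoted above can be sketched as follows. This is a hypothetical illustration, not the authors' code: random integers stand in for the real MNIST pixels, and the only claim taken from the paper is that each vector in {0, ..., 255}^d is divided by 255√d.

```python
import numpy as np

# Hypothetical sketch of the preprocessing described in the review:
# MNIST images are vectors in {0, ..., 255}^d with d = 784 = 28 * 28,
# each normalized by 255 * sqrt(d). Random integers stand in for real data.
d = 28 * 28
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(1000, d)).astype(float)  # stand-in for MNIST
X_normalized = X / (255 * np.sqrt(d))

# Any x in {0, ..., 255}^d satisfies ||x||_2 <= 255 * sqrt(d), so after
# normalization every vector lies in the unit ball.
norms = np.linalg.norm(X_normalized, axis=1)
```

One plausible reason for this particular constant: it bounds every data point's Euclidean norm by 1, which is the kind of bounded-norm assumption differentially private covariance estimators typically require.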
Dataset Splits | No | The paper mentions using a "training dataset" but does not specify any train/validation/test splits or cross-validation methodology for reproduction.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper mentions an implementation in Python and the use of the scikit-learn package, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "The ρ here is fixed at 0.1 and we examine the error growth w.r.t. d for n = 1000, 4000, 16000." ... "default values d = 200, n = 50000, N = 4 and ρ = 0.1" ... "Each experiment is repeated 50 times, and we report the average error." ... "we scale all datasets such that 0.5 ≤ rad(X) ≤ 1." ... "The parameter s characterizes the skewness, which we fix as s = 3."
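The rescaling step quoted above can be sketched as follows, under an assumed reading of the notation: rad(X) is taken here to be the maximum Euclidean norm over the data points, and dividing by it makes the radius exactly 1, which trivially satisfies 0.5 ≤ rad(X) ≤ 1. The dataset is a synthetic stand-in, not one from the paper.

```python
import numpy as np

# Hypothetical sketch of the dataset rescaling described in the review.
# Assumption: rad(X) denotes the maximum Euclidean norm of the data points.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 200))  # synthetic stand-in dataset

rad = np.linalg.norm(X, axis=1).max()
X_scaled = X / rad  # radius becomes exactly 1, inside [0.5, 1]
rad_scaled = np.linalg.norm(X_scaled, axis=1).max()
```

The paper allows any scaling landing in [0.5, 1]; dividing by rad(X) itself is just the simplest choice that meets the constraint.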