Lower Bound of Locally Differentially Private Sparse Covariance Matrix Estimation

Authors: Di Wang, Jinhui Xu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we study the sparse covariance matrix estimation problem in the local differential privacy model, and give a non-trivial lower bound on the non-interactive private minimax risk in the metric of squared spectral norm. We show that the lower bound is actually tight, as it matches a previous upper bound. Our main technique for achieving this lower bound is a general framework, called General Private Assouad Lemma, which is a considerable generalization of the previous private Assouad lemma and can be used as a general method for bounding the private minimax risk of matrix-related estimation problems. (A sketch of this risk metric appears below the table.)
Researcher Affiliation | Academia | Di Wang, Jinhui Xu, Department of Computer Science and Engineering, State University of New York at Buffalo, NY, USA. {dwang45,jinhui}@buffalo.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It focuses on theoretical proofs and mathematical derivations.
Open Source Code | No | The paper does not provide any statement about making its source code available or include links to a code repository.
Open Datasets | No | The paper is theoretical and does not involve experimental evaluation on datasets. Therefore, it does not mention public datasets for training.
Dataset Splits | No | The paper is theoretical and does not involve experimental evaluation on datasets. Therefore, it does not mention validation dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe experimental work. Therefore, it does not mention any hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe experimental work. Therefore, it does not list any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experiments. Therefore, it does not provide details about an experimental setup, hyperparameters, or training configurations.
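For readers unfamiliar with the quantity named in the Research Type row, the following is a minimal sketch of how the non-interactive locally private minimax risk under the squared spectral norm is typically written in the local differential privacy literature. The notation here ($\Theta$, $\mathcal{Q}_\epsilon^{\mathrm{NI}}$, $Z_i$, $\hat{\Sigma}$) is assumed for illustration and may differ from the paper's own definitions.

\[
  \mathcal{R}_{n,\epsilon}^{\mathrm{NI}}\bigl(\Theta, \|\cdot\|_2^2\bigr)
  \;=\;
  \inf_{Q \in \mathcal{Q}_\epsilon^{\mathrm{NI}}}\;
  \inf_{\hat{\Sigma}}\;
  \sup_{\Sigma \in \Theta}\;
  \mathbb{E}\Bigl[\bigl\|\hat{\Sigma}(Z_1,\dots,Z_n) - \Sigma\bigr\|_2^2\Bigr],
\]

where $\Theta$ is the class of sparse covariance matrices, $\mathcal{Q}_\epsilon^{\mathrm{NI}}$ is the class of non-interactive $\epsilon$-locally differentially private mechanisms that release privatized views $Z_1,\dots,Z_n$ of the raw samples, $\hat{\Sigma}$ is any estimator built only from those views, and $\|\cdot\|_2$ denotes the spectral norm. The paper's General Private Assouad Lemma is the tool used to lower bound this quantity.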