Differentially Private Regret Minimization in Episodic Markov Decision Processes

Authors: Sayak Ray Chowdhury, Xingyu Zhou (pp. 6375-6383)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study regret minimization in finite horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread applications of reinforcement learning (RL) in real-world sequential decision making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP: joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data, and local DP (LDP), where information needs to be protected directly on the user side. We first propose two general frameworks, one for policy optimization and another for value iteration, for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy JDP and LDP requirements, and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP, the cost of privacy is only a lower-order additive term, while for the stronger privacy protection of LDP, the cost suffered is multiplicative. Finally, the regret bounds are obtained by a unified analysis, which, we believe, can be extended beyond tabular MDPs.
Researcher Affiliation | Academia | 1. Indian Institute of Science, Bangalore, India; 2. ECE Department, Wayne State University, Detroit, USA. sayak@iisc.ac.in, xingyu.zhou@wayne.edu
Pseudocode | Yes | Algorithm 1: PRIVATE-UCB-PO; Algorithm 2: PRIVATE-UCB-VI. An illustrative sketch of the private-count idea behind these algorithms appears after this table.
Open Source Code | No | The paper is theoretical and does not mention releasing any source code for its methodology.
Open Datasets | No | The paper focuses on theoretical analysis and does not involve empirical experiments with datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets, so no dataset splits are mentioned.
Hardware Specification | No | The paper focuses on theoretical analysis and does not report on experimental hardware specifications.
Software Dependencies | No | The paper focuses on theoretical analysis and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper focuses on theoretical analysis and does not describe any experimental setup details such as hyperparameters or training configurations.
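The sketch below is a minimal illustration of the generic idea the abstract and pseudocode rows refer to: perturb empirical visit counts with calibrated noise, then run optimistic backward induction with an exploration bonus that shrinks as the noisy counts grow. It is not the paper's PRIVATE-UCB-VI construction; the Laplace noise scale, the floor at 1, the 1/sqrt(n) bonus, and all array shapes are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's exact PRIVATE-UCB-VI): it shows
# (i) perturbing empirical visit counts with calibrated noise and
# (ii) optimistic backward induction with a bonus ~ 1/sqrt(noisy count).
# Noise scale, clipping, bonus constant, and shapes are assumptions for illustration.
import numpy as np


def privatize_counts(visit_counts, epsilon, rng):
    """Return noisy visit counts via a standard Laplace-mechanism building block."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=visit_counts.shape)
    # Floor at 1 so divisions by the noisy counts below stay well defined.
    return np.maximum(visit_counts + noise, 1.0)


def optimistic_value_iteration(p_hat, r_hat, noisy_counts, horizon, bonus_scale=1.0):
    """Backward induction with an exploration bonus proportional to 1/sqrt(noisy count).

    Shapes (assumptions for this sketch):
      p_hat        (H, S, A, S)  estimated transition kernel per step
      r_hat        (H, S, A)     estimated mean rewards per step
      noisy_counts (H, S, A)     privatized visit counts per step
    """
    n_states = r_hat.shape[1]
    V = np.zeros((horizon + 1, n_states))
    policy = np.zeros((horizon, n_states), dtype=int)
    for h in range(horizon - 1, -1, -1):
        bonus = bonus_scale / np.sqrt(noisy_counts[h])      # (S, A)
        Q = r_hat[h] + p_hat[h] @ V[h + 1] + bonus           # (S, A)
        Q = np.minimum(Q, float(horizon))                    # keep optimistic values bounded
        policy[h] = Q.argmax(axis=1)
        V[h] = Q.max(axis=1)
    return policy, V


if __name__ == "__main__":
    # Tiny synthetic example: 3 states, 2 actions, horizon 4.
    rng = np.random.default_rng(0)
    H, S, A = 4, 3, 2
    p_hat = rng.dirichlet(np.ones(S), size=(H, S, A))        # (H, S, A, S)
    r_hat = rng.uniform(0.0, 1.0, size=(H, S, A))
    counts = rng.integers(1, 50, size=(H, S, A)).astype(float)
    noisy = privatize_counts(counts, epsilon=1.0, rng=rng)
    policy, V = optimistic_value_iteration(p_hat, r_hat, noisy, horizon=H)
    print("greedy policy per (h, s):\n", policy)
    print("optimistic value at h=0:", V[0])
```

Loosely speaking, under JDP a centralized agent would inject such noise once over aggregated episode statistics, whereas under LDP each user would perturb their own trajectory statistics before sending them; the abstract's summary that the JDP cost is a lower-order additive term while the LDP cost is multiplicative reflects that distinction.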