Local Differential Privacy for Belief Functions

Authors: Qiyu Li, Chunlai Zhou, Biao Qin, Zhiqiang Xu (pp. 10025-10033)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulation experiments are carried out to verify the trade-off in the privacy mechanism.
Researcher Affiliation | Academia | Qiyu Li (1), Chunlai Zhou (1)*, Biao Qin (1), Zhiqiang Xu (2); (1) Computer Science Dept., Renmin University of China, Beijing, China; (2) Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; {qiyuli,czhou,qinbiao}@ruc.edu.cn, zhiqiangxu2001@gmail.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about making its source code available or provide a link to a code repository.
Open Datasets | No | A simple random sample of n people is drawn with replacement from the population. Let Z_i denote the i-th sample element.
Dataset Splits | No | We set the sample size to be 10, 100, 500, 1000 and fix q3 = 0.1.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processors, or memory used for running experiments.
Software Dependencies | No | The paper does not provide any specific software details, such as library names or solver versions.
Experiment Setup | Yes | Simulation experiments are carried out to verify the trade-off in the privacy mechanism. In order to reduce the sampling error on the experimental results, the following results are the average of 1000 experimental outcomes. We set the sample size to be 10, 100, 500, 1000 and fix q3 = 0.1. Let π be the true proportion of the people having property P. A sample Y1, ..., Yn of respondents is drawn with replacement from the population and their responses are distributed i.i.d. according to (q1, q2, q3) = (π, 1 − π) Q_{2×3}, where

    Q_{2×3} = [ p  q  1−p−q ]
              [ q  p  1−p−q ]

with p, q ∈ [0, 1].
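The perturbation step in the setup above can be sketched in Python: each respondent's true bit is mapped to one of three answers (yes / no / unknown) with row probabilities (p, q, 1−p−q) or (q, p, 1−p−q), and π is then recovered from the observed "yes" frequency. This is a minimal illustration, not the paper's code; the function names and the parameter values p = 0.7, q = 0.2 (chosen so that q3 = 1 − p − q = 0.1, the value fixed in the experiments) are assumptions for the example.

```python
import random

def randomize_response(has_property: bool, p: float, q: float) -> str:
    """Perturb one respondent's true value with the 2x3 mechanism matrix:
    rows index the true value, columns the reported answer
    ('yes', 'no', 'unknown'); the third column carries mass 1 - p - q."""
    # Row of the mechanism matrix for this respondent's true value.
    probs = (p, q) if has_property else (q, p)
    r = random.random()
    if r < probs[0]:
        return "yes"
    if r < probs[0] + probs[1]:
        return "no"
    return "unknown"  # probability q3 = 1 - p - q

def simulate(pi: float, n: int, p: float, q: float, trials: int = 1000) -> float:
    """Average over `trials` runs the estimate of pi recovered from the
    perturbed responses: E[freq(yes)] = pi*p + (1 - pi)*q, so
    pi_hat = (freq(yes) - q) / (p - q)."""
    total = 0.0
    for _ in range(trials):
        yes = sum(
            randomize_response(random.random() < pi, p, q) == "yes"
            for _ in range(n)
        )
        total += (yes / n - q) / (p - q)
    return total / trials
```

For example, `simulate(0.3, 1000, 0.7, 0.2)` averages 1000 runs with sample size n = 1000 and should return an estimate close to the true proportion 0.3, illustrating the debiasing step that makes the privatized survey usable.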