Private Rank Aggregation in Central and Local Models

Authors: Daniel Alabi, Badih Ghazi, Ravi Kumar, Pasin Manurangsi (pp. 5984–5991)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work, we present differentially private algorithms for rank aggregation in the pure and approximate settings along with distribution-independent utility upper and lower bounds. In addition to bounds in the central model, we also present utility bounds for the local model of differential privacy.
Researcher Affiliation | Collaboration | 1 Harvard School of Engineering and Applied Sciences; 2 Google Research, Mountain View, CA
Pseudocode | Yes | Algorithm 1: DPKwikSort. Parameters: ϵ, δ ∈ (0, 1], q ∈ ℕ. Input: Π = {π1, . . . , πn}, U ⊆ [m], c ≥ 0... Algorithm 2: ϵ-DP Adaptive Query Answering Algorithm in the Local Model. Parameters: ϵ > 0, q ∈ ℕ, Partition (P1, . . . , Pm) of [n], ℓ: [m] → [m] × [m]... Algorithm 3: LDPKwikSort. Parameters: ϵ > 0, τ > 0, Partition (P1, . . . , Pm) of [n], ℓ: [m] → [m] × [m]
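Algorithm 1 follows the KwikSort paradigm: pick a random pivot and split the remaining candidates by noisy pairwise-majority comparisons. Below is a minimal central-model sketch in Python; the function names, the Laplace mechanism on pairwise vote counts, and the per-level privacy-budget split are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import math
import random

def laplace(scale):
    # Draw one sample from Laplace(0, scale) by inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_kwiksort(rankings, eps):
    """Differentially private quicksort-style rank aggregation (sketch).

    rankings: list of dicts mapping item -> rank (lower = more preferred).
    Assumes roughly log2(m) recursion levels and splits eps evenly across
    them; this budget accounting is a simplification for illustration.
    """
    items = list(rankings[0])
    scale = max(1.0, math.log2(max(2, len(items)))) / eps
    return _noisy_sort(items, rankings, scale)

def _noisy_sort(items, rankings, scale):
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    left, right = [], []
    for x in items:
        if x == pivot:
            continue
        # Count voters preferring x over the pivot, add Laplace noise,
        # and place x according to the noisy majority.
        votes = sum(1 for r in rankings if r[x] < r[pivot])
        if votes + laplace(scale) > len(rankings) / 2:
            left.append(x)   # noisy majority says x beats the pivot
        else:
            right.append(x)
    return (_noisy_sort(left, rankings, scale) + [pivot]
            + _noisy_sort(right, rankings, scale))
```

With a very large eps the noise vanishes, so on unanimous inputs the output reduces to the exact pairwise-majority order, which makes the privacy/utility trade-off easy to probe empirically.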
Open Source Code | No | The paper does not provide any links to open-source code or explicit statements about code availability.
Open Datasets | No | The paper is theoretical and does not mention the use of datasets for training, validation, or testing, nor does it provide any concrete access information for a publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not discuss training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not specify any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any specific experimental setup details, hyperparameters, or system-level training settings.
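Algorithms 2 and 3 operate in the local model, where each user privatizes their own responses before sending them to the aggregator. A standard primitive for one pairwise-comparison bit is ϵ-LDP randomized response; the sketch below (helper names are illustrative, not the paper's protocol) shows the privatization step and the corresponding debiased aggregate.

```python
import math
import random

def rr_bit(bit, eps):
    # eps-LDP randomized response: report the true bit with
    # probability e^eps / (e^eps + 1), otherwise flip it.
    keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < keep else 1 - bit

def debiased_fraction(reports, eps):
    # Unbiased estimate of the true fraction of 1-bits
    # recovered from the noisy randomized-response reports.
    keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return (sum(reports) / len(reports) - (1.0 - keep)) / (2.0 * keep - 1.0)
```

An aggregator can threshold the debiased fraction at 1/2 to decide each pairwise majority, mirroring the noisy-comparison step a KwikSort-style local-model sort would use.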