A Smooth Binary Mechanism for Efficient Private Continual Observation

Authors: Joel Daniel Andersson, Rasmus Pagh

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, a simple Python implementation of our approach outperforms the running time of the approach of Henzinger et al., as well as an attempt to improve their algorithm using high-performance algorithms for multiplication with Toeplitz matrices." and "The simulation was run 5 times for each method, meaning each method has 5 data points in the plot per time step. The computation was performed for elements of dimension d = 10^4, was run on a Macbook Pro 2021 with Apple M1 Pro chip and 16 GB memory using Python 3.9.6, scipy version 1.9.2, and numpy version 1.23.3."
Researcher Affiliation | Academia | Joel Daniel Andersson, Basic Algorithms Research Copenhagen, University of Copenhagen, jda@di.ku.dk; Rasmus Pagh, Basic Algorithms Research Copenhagen, University of Copenhagen, pagh@di.ku.dk
Pseudocode | Yes | "Algorithm 1: Prefix Sum for Binary Mechanism" and "Algorithm 2: Prefix Sum for Smooth Binary Mechanism"
Open Source Code | Yes | "A Python implementation of our smooth binary mechanism (and the classic binary mechanism) can be found on https://github.com/jodander/smooth-binary-mechanism."
Open Datasets | No | The paper focuses on algorithmic mechanisms for differential privacy under continual observation, evaluated on synthetic binary streams. It does not mention or provide access information for a publicly available dataset of the kind used to train machine learning models.
Dataset Splits | No | The paper evaluates algorithmic mechanisms and does not train models, so it reports no dataset splits (no percentages, sample counts, or splitting methodology).
Hardware Specification | Yes | "... was run on a Macbook Pro 2021 with Apple M1 Pro chip and 16 GB memory ..."
Software Dependencies | Yes | "... using Python 3.9.6, scipy version 1.9.2, and numpy version 1.23.3."
Experiment Setup | No | The paper does not provide training-style setup details such as hyperparameters (learning rate, batch size, epochs) or optimizer settings, as its experiments measure algorithm running time rather than model training.
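For context on the pseudocode row, the classic binary mechanism (Algorithm 1's subject) can be sketched as below. This is a hypothetical illustration, not the authors' code from the linked repository: the variable names, the Gaussian noise, and the noise scale `sigma` are assumptions made here for readability.

```python
import random

def binary_mechanism(stream, sigma, seed=0):
    """Sketch of the classic binary mechanism for private prefix sums.

    Maintains noisy partial sums (p-sums) over dyadic intervals of the
    stream; each released prefix sum adds up at most log2(t) + 1 noisy
    p-sums. Gaussian noise with scale `sigma` is an illustrative choice,
    not the paper's exact calibration.
    """
    rng = random.Random(seed)
    levels = len(stream).bit_length()
    psum = [0.0] * levels       # exact p-sums, one per tree level
    noisy = [0.0] * levels      # noisy p-sums, one per tree level
    out = []
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1      # lowest set bit of t
        psum[i] = x + sum(psum[:i])        # merge finished lower levels
        for j in range(i):                 # reset the merged levels
            psum[j] = noisy[j] = 0.0
        noisy[i] = psum[i] + rng.gauss(0.0, sigma)
        # prefix sum at time t: add the noisy p-sums at the set bits of t
        out.append(sum(noisy[j] for j in range(levels) if (t >> j) & 1))
    return out
```

With `sigma = 0` the mechanism reduces to exact prefix sums, which makes the dyadic bookkeeping easy to check. The paper's smooth variant (Algorithm 2) modifies this scheme so that every output aggregates the same number of noisy p-sums, evening out the variance over time.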
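The quoted remark about "high-performance algorithms for multiplication with Toeplitz matrices" refers to the FFT-based matrix-vector product that the authors tried as a speedup for Henzinger et al.'s approach. A minimal sketch of that primitive, using `scipy.linalg.matmul_toeplitz` (available in the scipy 1.9.2 pinned by the report), is below; the matrix coefficients are illustrative placeholders, not the actual matrix from Henzinger et al.

```python
import numpy as np
from scipy.linalg import matmul_toeplitz, toeplitz

# Multiply a lower-triangular Toeplitz matrix by a vector via FFT in
# O(n log n) (matmul_toeplitz) and compare against the O(n^2) dense
# product. The first-column coefficients here are assumed for the demo.
n = 8
col = 1.0 / (np.arange(n) + 1.0)  # assumed first column
row = np.zeros(n)
row[0] = col[0]                   # zero first row -> lower triangular

x = np.ones(n)
fast = matmul_toeplitz((col, row), x)  # FFT-based product
dense = toeplitz(col, row) @ x         # explicit dense product
assert np.allclose(fast, dense)
```

For the stream lengths in the paper's plots, the asymptotic gap between the two products is the reason this was a plausible optimization to benchmark against.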