Differentially Private Online-to-batch for Smooth Losses

Authors: Qinzi Zhang, Hoang Tran, Ashok Cutkosky

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of our algorithm on three different datasets: two benchmark datasets (MNIST and CIFAR-10) and one real-world dataset (Adult). In this section, we present numerical results on the three datasets.
Researcher Affiliation | Academia | Qinzi Zhang, Hoang Tran, Ashok Cutkosky (Boston University).
Pseudocode | Yes | Algorithm 1 DP-Online-to-Batch, Algorithm 2 Private Online Gradient Descent (DP-OGD). A generic, hedged sketch of a DP-OGD-style update appears after the table.
Open Source Code | No | The paper does not provide an explicit statement about the availability of source code or a link to a code repository.
Open Datasets | Yes | We evaluate the performance of our algorithm on three different datasets: two benchmark datasets (MNIST and CIFAR-10) and one real-world dataset (Adult).
Dataset Splits | Yes | For MNIST, we split the 60,000 samples into 50,000 training samples and 10,000 testing samples. For CIFAR-10, we use 50,000 training samples and 10,000 testing samples. For the Adult dataset, we split the 48,842 samples into 32,561 training samples and 16,281 testing samples.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | Our code is implemented in PyTorch. The paper does not specify the version of PyTorch or any other software dependencies.
Experiment Setup | Yes | For MNIST and CIFAR-10, we train our model for 500 epochs with a batch size of 128. For the Adult dataset, we train for 1000 epochs with a batch size of 128. The learning rate for DP-SGD and DP-OGD is set to 0.01. The step size η for Algorithm 1 is set to 100. These reported values are collected into a configuration sketch below.
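
The Pseudocode row names Algorithm 1 (DP-Online-to-Batch) and Algorithm 2 (DP-OGD), but the report does not reproduce either algorithm. The sketch below is only a minimal, generic differentially private online gradient descent step in PyTorch, per-sample clipping followed by Gaussian noise; it is not the paper's Algorithm 1 or 2, and the function name dp_ogd_step, the clipping threshold clip_norm, and the noise multiplier sigma are illustrative assumptions.

```python
import torch

def dp_ogd_step(params, per_sample_grads, lr=0.01, clip_norm=1.0, sigma=1.0):
    """Generic noisy online gradient descent step (illustrative sketch only,
    not the paper's Algorithm 2): clip each per-sample gradient, average over
    the batch, add Gaussian noise, then take a descent step of size `lr`.

    For brevity, each parameter tensor is clipped independently; standard
    DP-SGD clips the full concatenated per-sample gradient instead.
    """
    batch_size = per_sample_grads[0].shape[0]
    with torch.no_grad():
        for p, g in zip(params, per_sample_grads):      # g: (batch, *p.shape)
            flat = g.reshape(batch_size, -1)
            norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
            clipped = flat * (clip_norm / norms).clamp(max=1.0)  # per-sample clip
            avg = clipped.mean(dim=0)                   # average clipped gradients
            noise = torch.randn_like(avg) * sigma * clip_norm / batch_size
            p -= lr * (avg + noise).reshape(p.shape)    # noisy gradient step
    return params
```

A real implementation would also track the cumulative privacy budget across steps (e.g., with a privacy accountant), which this sketch omits.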
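The Dataset Splits and Experiment Setup rows report concrete values (the 50,000/10,000 MNIST split, 500 or 1000 epochs, batch size 128, learning rate 0.01, step size η = 100). The snippet below merely collects those reported values into a PyTorch-style configuration and recreates the MNIST split with torchvision; the data root "./data", the ToTensor transform, the random seed, and the use of random_split are assumptions for illustration, since the paper's actual data pipeline is not available.

```python
import torch
from torchvision import datasets, transforms

# Hyperparameters as quoted in the "Experiment Setup" row.
CONFIG = {
    "epochs_mnist_cifar": 500,   # MNIST and CIFAR-10
    "epochs_adult": 1000,        # Adult dataset
    "batch_size": 128,
    "lr": 0.01,                  # learning rate for DP-SGD / DP-OGD
    "eta": 100,                  # step size η for Algorithm 1
}

# Assumed recreation of the reported MNIST split: 60,000 samples divided into
# 50,000 training and 10,000 testing samples. Data root, transform, and seed
# are illustrative choices, not details taken from the paper.
mnist = datasets.MNIST(root="./data", train=True, download=True,
                       transform=transforms.ToTensor())
train_set, test_set = torch.utils.data.random_split(
    mnist, [50_000, 10_000], generator=torch.Generator().manual_seed(0))

train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=CONFIG["batch_size"], shuffle=True)
test_loader = torch.utils.data.DataLoader(
    test_set, batch_size=CONFIG["batch_size"], shuffle=False)
```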