Differentially Private Sliced Wasserstein Distance

Authors: Alain Rakotomamonjy, Liva Ralaivola

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we provide some numerical results showing how our differentially private Sliced Wasserstein Distance works in practice. The code for reproducing some of the results is available in https://github.com/arakotom/dp_swd. Also, from Section 6.2 (Domain Adaptation): We conduct experiments for evaluating our DP-SWD distance in the context of classical unsupervised domain adaptation (UDA) problems such as handwritten digit recognition (MNIST/USPS), synthetic-to-real object data (VisDA 2017) and Office 31 datasets.
Researcher Affiliation | Collaboration | (1) Criteo AI Lab, Paris, France; (2) LITIS EA4108, Université de Rouen Normandie, Saint-Étienne-du-Rouvray, France.
Pseudocode | Yes | Algorithm 1 (Private and Smoothed Sliced Wasserstein Distance) and Algorithm 2 (Differentially Private DANN with DP-SWD). (An illustrative PyTorch sketch of the distance follows the table.)
Open Source Code | Yes | The code for reproducing some of the results is available in https://github.com/arakotom/dp_swd.
Open Datasets | Yes | We conduct experiments for evaluating our DP-SWD distance in the context of classical unsupervised domain adaptation (UDA) problems such as handwritten digit recognition (MNIST/USPS), synthetic-to-real object data (VisDA 2017) and Office 31 datasets. (A hedged data-loading sketch follows the table.)
Dataset Splits | No | The paper mentions training on source datasets and evaluating on target datasets, but does not provide specific percentages or counts for train/validation/test splits for any of the datasets used in the experiments.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using 'the PyTorch package (Xiang, 2020)' and an 'Adam optimizer' but does not specify version numbers for PyTorch or any other software components.
Experiment Setup | Yes | For all methods and for each dataset, we used the same neural network architecture for representation mapping and for classification... All models are compared with the same fixed budget of privacy (ε, δ) = (10, 10⁻⁵)... trained over 100 epochs with an Adam optimizer and a batch size of 100. For our DP-SWD we have used 1000 random projections and the output dimension is the classical 28 × 28 = 784. Also: We have used k = 2000 projections, which leads to a ratio k/d < 0.25. Noise variance σ and the privacy loss over 100 iterations have been evaluated using the PyTorch package of (Wang et al., 2019) and have been calibrated for ε = 10 and δ = 10⁻⁶, since the number of training samples is of the order of 170K.
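
To make the Pseudocode row concrete, here is a minimal PyTorch sketch of a smoothed/privatized sliced Wasserstein computation. It is not the authors' Algorithm 1: the function name `dp_sliced_wasserstein`, the equal-sample-size and bounded-input assumptions, and the externally supplied noise scale `sigma` are illustrative choices; in the paper the noise is calibrated to a target (ε, δ) budget with a privacy accountant.

```python
import torch

def dp_sliced_wasserstein(x, y, num_projections=1000, sigma=1.0, p=2):
    """Sketch of a smoothed sliced Wasserstein distance with Gaussian noise.

    Assumes x and y are (n, d) tensors with the same number of rows and
    bounded norms, so that adding Gaussian noise of scale `sigma` to the
    1-D projections acts as a Gaussian mechanism; `sigma` must be
    calibrated elsewhere to the desired (epsilon, delta) budget.
    """
    d = x.shape[1]
    # Random projection directions, normalized onto the unit sphere.
    theta = torch.randn(num_projections, d)
    theta = theta / theta.norm(dim=1, keepdim=True)

    # Project both samples onto every direction: shape (n, num_projections).
    x_proj, y_proj = x @ theta.t(), y @ theta.t()

    # Gaussian smoothing / privatization of the projected values.
    x_proj = x_proj + sigma * torch.randn_like(x_proj)
    y_proj = y_proj + sigma * torch.randn_like(y_proj)

    # In 1-D, the p-Wasserstein distance pairs sorted samples.
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return (x_sorted - y_sorted).abs().pow(p).mean().pow(1.0 / p)
```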
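
The Open Datasets row lists MNIST, USPS, VisDA 2017 and Office 31. As a rough sketch of how these could be loaded for reproduction: MNIST and USPS ship with torchvision, while VisDA 2017 and Office 31 must be downloaded manually and read as image folders. The paths and the 28×28 resizing below are assumptions, not details taken from the paper.

```python
from torchvision import datasets, transforms

# Digits: resize USPS (natively 16x16) to 28x28 so source and target share
# the 784-dimensional input mentioned in the Experiment Setup row.
to_28 = transforms.Compose([transforms.Resize((28, 28)), transforms.ToTensor()])
mnist_train = datasets.MNIST("data/", train=True, download=True, transform=to_28)
usps_train = datasets.USPS("data/", train=True, download=True, transform=to_28)

# VisDA 2017 and Office 31 are distributed as labelled image folders and
# must be downloaded manually; the directory below is a placeholder.
office_amazon = datasets.ImageFolder(
    "data/office31/amazon/images",
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]))
```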
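
Finally, the Experiment Setup row can be read as a DANN-style training loop in which DP-SWD aligns source and target representations. The sketch below reuses the quoted values (Adam, batch size 100, 100 epochs, 1000 projections), but everything else (the synthetic stand-in data, the two-layer architecture, the learning rate, the alignment weight `lam`, and the placeholder `sigma`) is an assumption; it approximates the flavour of Algorithm 2 rather than the authors' exact procedure, and it relies on the `dp_sliced_wasserstein` sketch above.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data so the sketch runs; real experiments would use
# the UDA datasets from the Open Datasets row.
source = TensorDataset(torch.rand(1000, 784), torch.randint(0, 10, (1000,)))
target = TensorDataset(torch.rand(1000, 784), torch.randint(0, 10, (1000,)))
source_loader = DataLoader(source, batch_size=100, shuffle=True, drop_last=True)
target_loader = DataLoader(target, batch_size=100, shuffle=True, drop_last=True)

# Hypothetical networks; the paper uses dataset-specific architectures.
feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 128))
classifier = nn.Linear(128, 10)

optimizer = optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()),
    lr=1e-3)                         # learning rate is an assumed value
ce = nn.CrossEntropyLoss()
sigma, lam = 1.0, 1.0                # sigma must be calibrated to (eps, delta);
                                     # lam is an assumed alignment weight

for epoch in range(100):             # 100 epochs, as in the quoted setup
    for (xs, ys), (xt, _) in zip(source_loader, target_loader):
        zs, zt = feature_extractor(xs), feature_extractor(xt)
        loss = ce(classifier(zs), ys) + lam * dp_sliced_wasserstein(
            zs, zt, num_projections=1000, sigma=sigma)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```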