Differentially Private Optimal Transport: Application to Domain Adaptation

Authors: Nam LeTien, Amaury Habrard, Marc Sebban

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform an extensive series of experiments on various benchmarks (VisDA, Office-Home and Office-Caltech datasets) that demonstrates the efficiency of our method compared to non-private strategies."
Researcher Affiliation | Academia | "Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023, SAINT-ETIENNE, France {tien.le, amaury.habrard, marc.sebban}@univ-st-etienne.fr"
Pseudocode | Yes | "Algorithm 1: Differentially Private Optimal Transport. Algorithm 2: Differentially Private Domain Adaptation."
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the methodology described.
Open Datasets | Yes | "We evaluate our method on three domain adaptation benchmarks from the classical Office-Caltech dataset [Saenko et al., 2010] to the more recent and challenging VisDA [Peng et al., 2017] and Office-Home [Venkateswara et al., 2017] datasets."
Dataset Splits | No | The paper discusses a "whole batch setting" and a "minibatch setting" for its experiments but does not give the train/validation/test splits (percentages or absolute counts) needed for reproducibility; it mentions a minibatch of size 128 but not how the data is partitioned into splits.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processors, or memory used for running the experiments.
Software Dependencies | No | "All methods are written in Keras [Chollet, 2015] with the same target model architecture (a 3-layer neural network) for fair comparison. The coupling matrices are computed using the POT library [Flamary and Courty, 2017]. For PATE and DPDA, we use the privacy accountant tool [Abadi et al., 2016]." However, specific version numbers for Keras, POT, or the privacy accountant tool are not provided.
Experiment Setup | Yes | "For OTDA and our method DPDA, we set the hyper-parameters λe and λg of Eq. (3) to 0.01 and 0.1, respectively. In all benchmarks, we set the dimension of the subspace of our method ℓ = k/10 and the noise ratio σw = 1.1. For the privacy budget, we again follow the standard of [Abadi et al., 2016; Papernot et al., 2017] by setting δ = 1/(1.2·ns), ε = 2 for VisDA and ε = 8 for the other datasets, except ε = 20 if the source is DSLR or Webcam in Office-Caltech since they have too few samples (150-200 in total)."
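The coupling matrices cited in the Software Dependencies row are standard entropic optimal-transport solutions. As an illustration of the kind of computation the quoted POT library performs, here is a minimal NumPy re-implementation of the Sinkhorn iteration on toy data; the function name, regularization value, and data are assumptions for this sketch, not the paper's code.

```python
import numpy as np

def sinkhorn_coupling(a, b, M, reg=0.5, n_iter=500):
    """Entropic-regularized OT coupling between histograms a and b with
    cost matrix M (illustrative re-implementation, not the POT library)."""
    K = np.exp(-M / reg)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):         # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
Xs = rng.normal(size=(5, 2))                           # toy source samples
Xt = rng.normal(loc=1.0, size=(7, 2))                  # toy target samples
M = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
a = np.full(5, 1 / 5)                                  # uniform source weights
b = np.full(7, 1 / 7)                                  # uniform target weights
G = sinkhorn_coupling(a, b, M)
# The marginals of G approximately recover a and b.
```

In POT itself the equivalent call is `ot.sinkhorn(a, b, M, reg)`, with `ot.dist` building the cost matrix.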
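The privacy-budget settings quoted in the Experiment Setup row can be summarized in a small helper. This is a hypothetical sketch: the function name and structure are not from the paper, only the ε values and the rule δ = 1/(1.2·ns) quoted above.

```python
def privacy_budget(dataset, n_source, source_domain=None):
    """Return (epsilon, delta) per the settings quoted in the review.
    Illustrative helper; names and structure are assumptions."""
    delta = 1.0 / (1.2 * n_source)  # delta = 1/(1.2 * ns)
    if dataset == "VisDA":
        eps = 2.0
    elif dataset == "Office-Caltech" and source_domain in ("DSLR", "Webcam"):
        eps = 20.0                  # these sources have only 150-200 samples
    else:
        eps = 8.0                   # Office-Home, remaining Office-Caltech
    return eps, delta
```

For example, `privacy_budget("Office-Caltech", 200, source_domain="DSLR")` yields the relaxed ε = 20 the paper uses for its smallest source domains.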