PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning
Authors: Seng Pei Liew, Tsubasa Takahashi, Michihiko Ueno
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy. |
| Researcher Affiliation | Industry | Seng Pei Liew, Tsubasa Takahashi, Michihiko Ueno LINE Corporation {sengpei.liew,tsubasa.takahashi,michihiko.ueno}@linecorp.com |
| Pseudocode | Yes | Algorithm 1: PEARL Training |
| Open Source Code | No | The paper does not explicitly state that the source code for its methodology has been released, nor does it provide a link. |
| Open Datasets | Yes | To test the efficacy of PEARL, we perform empirical evaluations on three datasets, namely MNIST (Le Cun et al. (2010)), Fashion-MNIST (Xiao et al. (2017)) and Adult (Asuncion & Newman (2007)). |
| Dataset Splits | No | For MNIST and Fashion-MNIST, we use the default train subset of the torchvision library for training the generator, and the default test subset for evaluation. |
| Hardware Specification | Yes | On a single GPU (Tesla V100-PCIE-32GB), training MNIST (with 100 epochs) requires less than 10 minutes. |
| Software Dependencies | No | The paper mentions 'torchvision', 'scikit-learn', and 'Adam optimizer' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We use Adam optimizer with learning rates of 0.01 for both the minimization and maximization objectives. Batch size is 100 (1,100) for the image datasets (tabular dataset). The number of frequencies is set to 1,000 (3,000) for MNIST and tabular datasets (Fashion-MNIST). The training iterations are 6,000, 3,000, and 8,000 for MNIST, Fashion-MNIST, and tabular datasets respectively. |
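
To show how the reported hyperparameters fit together, the following PyTorch sketch wires them into a minimal PEARL-style training loop. Only the quoted numbers (Adam with learning rate 0.01 for both the minimization and maximization objectives, the batch sizes, frequency counts, and iteration counts) come from the paper; everything else is an illustrative assumption, not the authors' implementation: the generator architecture, the simplified characteristic-function loss, the softmax parameterization of the adversarial frequency weights, and the random stand-in for the privatized target embedding.

```python
# Illustrative sketch only. Hyperparameter values are quoted from the paper;
# the network, loss, and weight parameterization are assumptions.
import torch
import torch.nn as nn

CONFIGS = {
    # dataset: (batch_size, num_frequencies, train_iterations)
    "mnist":         (100,  1000, 6000),
    "fashion_mnist": (100,  3000, 3000),
    "adult":         (1100, 1000, 8000),
}

def char_fn(x, freqs):
    """Empirical characteristic function of x at the given frequencies."""
    proj = x @ freqs.T                                   # (batch, n_freqs)
    return torch.stack([proj.cos().mean(0), proj.sin().mean(0)])

dataset, latent_dim, data_dim = "mnist", 100, 784        # dims are assumptions
batch_size, n_freqs, iterations = CONFIGS[dataset]

generator = nn.Sequential(                               # placeholder network
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
freqs = torch.randn(n_freqs, data_dim)                   # sampled frequencies
weights = nn.Parameter(torch.zeros(n_freqs))             # adversarial weights

# Adam with lr = 0.01 for both objectives, as reported in the paper.
opt_min = torch.optim.Adam(generator.parameters(), lr=0.01)
opt_max = torch.optim.Adam([weights], lr=0.01)

# Stand-in for the DP-sanitized characteristic function of the real data,
# which PEARL would compute once under a fixed privacy budget.
target_cf = char_fn(torch.randn(1000, data_dim), freqs)

for step in range(iterations):
    # Minimization: move the generator's embedding toward the target.
    w = torch.softmax(weights, dim=0).detach()
    fake = generator(torch.randn(batch_size, latent_dim))
    gap = ((char_fn(fake, freqs) - target_cf) ** 2).sum(0)  # per-frequency
    opt_min.zero_grad()
    (w * gap).sum().backward()
    opt_min.step()

    # Maximization: re-weight frequencies to emphasize poorly matched ones.
    w = torch.softmax(weights, dim=0)
    fake = generator(torch.randn(batch_size, latent_dim)).detach()
    gap = ((char_fn(fake, freqs) - target_cf) ** 2).sum(0)
    opt_max.zero_grad()
    (-(w * gap).sum()).backward()
    opt_max.step()
```

The alternating min/max steps mirror the paper's description of training with both a minimization and a maximization objective (Algorithm 1); the softmax over frequency weights is one simple way to keep the adversarial re-weighting bounded and is a design choice of this sketch, not necessarily the paper's.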