Differentially Private Normalizing Flows for Synthetic Tabular Data Generation
Authors: Jaewoo Lee, Minjung Kim, Yonghyun Jeong, Youngmin Ro
AAAI 2022, pp. 7345–7353
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluations show that the proposed model preserves statistical properties of the original dataset better than other baselines. |
| Researcher Affiliation | Collaboration | 1University of Georgia 2Samsung SDS jwlee@cs.uga.edu, {mj100.kim, yhyun.jeong, youngmin.ro}@samsung.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | To evaluate the performance of DP-HFlow, we perform experiments on four real datasets: Adult, Census, Covertype, Intrusion. |
| Dataset Splits | No | The paper mentions evaluating on a 'testing' set but does not provide specific details on training, validation, and test splits (e.g., percentages or counts) or a general splitting methodology. |
| Hardware Specification | Yes | All experiments were performed on a server with an NVIDIA RTX 8000 GPU. |
| Software Dependencies | No | The paper mentions "Ada Belief optimizer (Zhuang et al. 2020)" but does not specify software names with version numbers for replication. |
| Experiment Setup | Yes | In all experiments, DP-HFlow is instantiated by stacking 3 blocks of autoregressive spline transformation and low rank-based linear transformation on top of a dequantization layer. A reverse ordering permutation is inserted between blocks. We used the AdaBelief optimizer (Zhuang et al. 2020) with learning rate 0.001 and default smoothing parameters β1 = 0.9 and β2 = 0.999. |
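The reported optimizer settings can be illustrated with a minimal, self-contained sketch of the AdaBelief update rule (Zhuang et al. 2020) using the paper's hyperparameters (lr = 0.001, β1 = 0.9, β2 = 0.999). The function name, toy objective, and eps value below are illustrative assumptions, not taken from the paper; in practice one would use the authors' released `adabelief-pytorch` package rather than hand-rolling the update.

```python
import math

def adabelief_minimize(grad_fn, theta, steps, lr=1e-3,
                       beta1=0.9, beta2=0.999, eps=1e-8):
    """Single-parameter AdaBelief sketch: like Adam, but the second-moment
    estimate tracks the variance of (g - m), the gradient's deviation from
    its EMA prediction, rather than the raw squared gradient."""
    m = 0.0  # EMA of the gradient (first moment)
    s = 0.0  # EMA of the squared prediction error, the "belief"
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2 + eps
        m_hat = m / (1 - beta1 ** t)   # bias correction, as in Adam
        s_hat = s / (1 - beta2 ** t)
        theta -= lr * m_hat / (math.sqrt(s_hat) + eps)
    return theta

# Toy quadratic f(x) = (x - 2)^2 with gradient 2(x - 2); optimum at x = 2.
x = adabelief_minimize(lambda x: 2 * (x - 2), theta=0.0, steps=5000)
```

When the gradient is well predicted by its EMA (small `g - m`), `s` shrinks and AdaBelief takes larger, Adam-like confident steps; noisy gradients inflate `s` and damp the step, which is the property the optimizer trades on.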