Private Image Reconstruction from System Side Channels Using Generative Models

Authors: Yuanyuan Yuan, Shuai Wang, Junping Zhang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. Our evaluations show that the proposed framework can generate images with vivid details and are closely similar to reference inputs. The reconstructed images show high discriminability, making privacy leakage attacks more practical.
Researcher Affiliation | Academia | Yuanyuan Yuan & Shuai Wang, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR, {yyuanaq,shuaiw}@cse.ust.hk; Junping Zhang, Department of Computer Science, Fudan University, Shanghai, China, jpzhang@fudan.edu.cn
Pseudocode | No | The paper includes architectural tables (Tables 4, 5, 6, 7) that describe network layers and output shapes, but these are not structured pseudocode or algorithm blocks. Figure 26 shows a dataflow diagram, not pseudocode.
Open Source Code | Yes | Our code is at https://github.com/genSCA/genSCA.
Open Datasets | Yes | (i) Large-scale CelebFaces Attributes (CelebA) (Liu et al., 2015b) contains about 200K celebrity face images. (ii) KTH Human Actions (KTH) (Laptev & Lindeberg, 2004) contains videos of six actions performed by 25 persons in 4 directions. (iii) LSUN Bedroom Scene (LSUN) (Yu et al., 2015) contains images of typical bedroom scenes.
Dataset Splits | No | The paper specifies training and testing splits, but does not explicitly mention a separate validation split with specific percentages or counts.
Hardware Specification | Yes | We ran our experiments on an Intel Xeon CPU E5-2678 with 256 GB of RAM and one Nvidia GeForce RTX 2080 GPU.
Software Dependencies | Yes | We implement our framework in PyTorch (ver. 1.5.0).
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) with learning rate η_VAE-LP = 0.0001 for the VAE-LP module, and learning rate η_GAN = 0.0002 for the GAN module. We set β1 = 0.5 and β2 = 0.999 for both modules. β in Loss_VAE-LP is 0.0001, and γ in Loss_GAN is 100. Minibatch size is 50. Training completes at 200 iterations (100 iterations for the VAE-LP module, and 100 iterations for the GAN module).
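For reference, the reported hyperparameters can be collected into a single configuration sketch. This is plain Python for illustration only: the dictionary key names and layout are assumptions, not from the paper; the values are those quoted in the setup above.

```python
# Illustrative collection of the training hyperparameters reported in the paper.
# Key names are assumed; values come from the quoted experiment setup.
train_config = {
    "optimizer": "Adam",                          # Kingma & Ba, 2014
    "vae_lp": {"lr": 1e-4, "iterations": 100},    # η_VAE-LP
    "gan":    {"lr": 2e-4, "iterations": 100},    # η_GAN
    "betas": (0.5, 0.999),                        # Adam β1, β2, shared by both modules
    "loss_weights": {
        "beta": 1e-4,                             # β in Loss_VAE-LP
        "gamma": 100,                             # γ in Loss_GAN
    },
    "batch_size": 50,
}

# Total training length: 100 VAE-LP iterations + 100 GAN iterations = 200.
total_iters = (train_config["vae_lp"]["iterations"]
               + train_config["gan"]["iterations"])
print(total_iters)  # → 200
```

In PyTorch these per-module settings would typically map onto two separate `torch.optim.Adam` instances, one per module, each constructed with its module's `lr` and the shared `betas`.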