Modeling Deep Learning Based Privacy Attacks on Physical Mail

Authors: Bingyao Huang, Ruyi Lian, Dimitris Samaras, Haibin Ling | pp. 1593-1601

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show experimentally that hidden content details, such as texture and image structure, can be clearly recovered." From the Experimental Evaluations section: "In this section, we quantitatively and qualitatively evaluate and compare the proposed Neural-STE with PSDNet (Guo et al. 2020), a learning-based imaging-through-scattering-media method, Pix2pix (Isola et al. 2017), a general GAN-based image-to-image translation model, Pix2pixHD (Wang et al. 2018), an improved version of Pix2pix, and degraded versions of the proposed method."
Researcher Affiliation | Academia | Bingyao Huang, Ruyi Lian, Dimitris Samaras, Haibin Ling, Stony Brook University, NY, USA. {bihuang, rulian, samaras, hling}@cs.stonybrook.edu
Pseudocode | No | The paper describes the model architecture and components in text and diagrams (Figure 2), but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "The source code, benchmark dataset and experimental results are publicly available at https://github.com/BingyaoHuang/Neural-STE."
Open Datasets | Yes | The collected data is available as the Neural-STE dataset. "We implement Neural-STE using PyTorch (Paszke et al. 2017) and Kornia (Riba et al. 2019), and optimize it using the Adam optimizer (Kingma and Ba 2015). The source code, benchmark dataset and experimental results are publicly available at https://github.com/BingyaoHuang/Neural-STE."
Dataset Splits | No | "For each setup, we split the captured 500 image pairs into 450 training samples and 50 testing samples." (Only training and testing samples are mentioned; there is no explicit validation split.)
Hardware Specification | Yes | "Then, we train the model for 4,000 iterations on three Nvidia GeForce 1080Ti GPUs with a batch size of 16, taking about 18 minutes to train."
Software Dependencies | No | "We implement Neural-STE using PyTorch (Paszke et al. 2017) and Kornia (Riba et al. 2019), and optimize it using the Adam optimizer (Kingma and Ba 2015)." The paper names PyTorch and Kornia but does not specify exact version numbers for these libraries.
Experiment Setup | Yes | "The proposed setup consists of a Canon 6D camera with the resolution set to 320×240. The initial learning rate and penalty factor are set to 10^-3 and 5×10^-4, respectively. Then, we train the model for 4,000 iterations on three Nvidia GeForce 1080Ti GPUs with a batch size of 16, taking about 18 minutes to train."
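The 450/50 train/test split reported above can be reproduced in spirit with a few lines of Python. The paper does not describe how the 450 training pairs are selected, so the seeded shuffle below is an assumption for illustration; the function name `split_pairs` is hypothetical.

```python
import random

def split_pairs(num_pairs=500, num_train=450, seed=0):
    # Hypothetical split procedure: the paper only states
    # 450 training / 50 testing pairs per setup, not how
    # they were chosen; a seeded shuffle is assumed here.
    indices = list(range(num_pairs))
    random.Random(seed).shuffle(indices)
    return indices[:num_train], indices[num_train:]

train_idx, test_idx = split_pairs()
print(len(train_idx), len(test_idx))  # 450 50
```

A fixed seed keeps the split deterministic across runs, which matters for reproducibility when no official split file is released.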
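The reported hyperparameters imply a concrete training budget. A quick back-of-the-envelope check (my arithmetic, not a figure from the paper): 4,000 iterations at batch size 16 means 64,000 samples are seen, or roughly 142 passes over the 450 training pairs.

```python
# Training budget implied by the reported setup:
# 4,000 iterations, batch size 16, 450 training samples per setup.
iterations = 4000
batch_size = 16
train_samples = 450

samples_seen = iterations * batch_size        # total samples processed
approx_epochs = samples_seen / train_samples  # approximate epochs
print(samples_seen)                 # 64000
print(round(approx_epochs, 1))      # 142.2
```

This is one way to compare training cost across the baselines (PSDNet, Pix2pix) when they report in epochs rather than iterations.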