Hiding Images in Plain Sight: Deep Steganography

Authors: Shumeet Baluja

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each entry below lists the reproducibility variable, the assessed result, and the LLM response supporting that assessment.
Research Type: Experimental
LLM Response: "Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed to specifically work as a pair. The system is trained on images drawn randomly from the ImageNet database, and works well on natural images from a wide variety of sources. Beyond demonstrating the successful application of deep learning to hiding images, we carefully examine how the result is achieved and explore extensions." From Section 3 (Empirical Evaluation): "The three networks were trained as described above using Adam [23]. For simplicity, the reconstructions minimized the sum of squares error of the pixel difference, although other image metrics could have easily been substituted [24, 25]. The networks were trained using randomly selected pairs of images from the ImageNet training set [26]. Quantitative results are shown in Figure 4, as measured by the SSE per pixel, per channel. The testing was conducted on 1,000 image pairs taken from ImageNet images (not used in training)."
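The quoted setup (a hiding network and a reveal network trained jointly with Adam to minimize a per-pixel sum-of-squares reconstruction error) might look roughly like the sketch below. The framework (PyTorch), layer shapes, optimizer settings, and the beta weighting on the secret-reconstruction term are assumptions made for illustration; the paper actually trains three networks (prep, hiding, reveal), which are folded into two here for brevity.

```python
import torch
import torch.nn as nn

class HideNet(nn.Module):
    """Takes a cover and a secret image, emits a container image C' (sketch only)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 50, 3, padding=1), nn.ReLU(),
            nn.Conv2d(50, 50, 3, padding=1), nn.ReLU(),
            nn.Conv2d(50, 3, 3, padding=1),
        )

    def forward(self, cover, secret):
        return self.body(torch.cat([cover, secret], dim=1))

class RevealNet(nn.Module):
    """Takes the container image, recovers an estimate S' of the secret."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 50, 3, padding=1), nn.ReLU(),
            nn.Conv2d(50, 50, 3, padding=1), nn.ReLU(),
            nn.Conv2d(50, 3, 3, padding=1),
        )

    def forward(self, container):
        return self.body(container)

hide, reveal = HideNet(), RevealNet()
opt = torch.optim.Adam(list(hide.parameters()) + list(reveal.parameters()))
beta = 1.0  # assumed weighting of the secret-reconstruction error

def training_step(cover, secret):
    container = hide(cover, secret)
    recovered = reveal(container)
    # Sum-of-squares pixel error on both reconstructions, per the Section 3 quote.
    loss = ((container - cover) ** 2).sum() + beta * ((recovered - secret) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```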
Researcher Affiliation: Industry
LLM Response: Shumeet Baluja, Google Research, Google, Inc. (shumeet@google.com)
Pseudocode: No
LLM Response: The paper describes the architecture and training process but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No
LLM Response: The paper does not provide an explicit statement about releasing its source code, nor does it include a link to a code repository.
Open Datasets: Yes
LLM Response: "The networks were trained using randomly selected pairs of images from the ImageNet training set [26]."
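The random pairing of cover and secret images mentioned in that quote could be reproduced with a small dataset wrapper such as the one below. This is a sketch under assumptions: the paper does not describe its data pipeline, and the wrapper and its base dataset (for example, a torchvision ImageFolder over ImageNet) are illustrative only.

```python
import torch
from torch.utils.data import Dataset

class RandomPairDataset(Dataset):
    """Wraps an image dataset and yields randomly selected (cover, secret) pairs."""
    def __init__(self, base):
        self.base = base  # e.g. torchvision.datasets.ImageFolder over ImageNet

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        cover, _ = self.base[idx]
        # Draw an independent random image to serve as the secret.
        j = torch.randint(len(self.base), (1,)).item()
        secret, _ = self.base[j]
        return cover, secret
```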
Dataset Splits: No
LLM Response: The paper mentions using the ImageNet training set for training and 1,000 image pairs (not used in training) for testing, but it does not specify explicit training/validation/test splits, percentages, or a distinct validation set.
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory used for running the experiments.
Software Dependencies: No
LLM Response: The paper does not provide specific software dependencies with version numbers (e.g., library names such as PyTorch or TensorFlow with their versions).
Experiment Setup: No
LLM Response: While the paper describes the network architecture (e.g., five convolution layers with specific filter sizes) and the optimizer (Adam), it does not provide specific hyperparameters such as the learning rate, batch size, or number of training epochs, which are crucial for reproducing the experiment setup.
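For context on what is and is not specified, the architectural detail the paper does report could be sketched as follows. The 3x3/4x4/5x5 branch sizes with 50 filters per branch follow the paper's architecture description; the "same" padding, ReLU activations, and PyTorch itself are assumptions, and a reproducer would still have to guess the unreported learning rate, batch size, and number of training epochs.

```python
import torch
import torch.nn as nn

class MultiScaleConvLayer(nn.Module):
    """Parallel 3x3 / 4x4 / 5x5 convolutions, 50 filters each, outputs concatenated."""
    def __init__(self, in_channels):
        super().__init__()
        # 'same' padding and ReLU are assumptions; the paper does not state them.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, 50, k, padding="same") for k in (3, 4, 5)
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))

def make_stack(in_channels=3, depth=5):
    """Stack of five multi-scale layers, matching the reported layer count."""
    layers, channels = [], in_channels
    for _ in range(depth):
        layers.append(MultiScaleConvLayer(channels))
        channels = 3 * 50  # three branches of 50 filters each
    return nn.Sequential(*layers)
```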