Generating steganographic images via adversarial training

Authors: Jamie Hayes, George Danezis

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our scheme on two independent image datasets, showing our novel method of studying steganographic problems is surprisingly competitive against established steganographic techniques. As a proof of concept, we implemented our adversarial training scheme on two image datasets: celebrity faces in the wild (celebA) [14] and a standard steganography research dataset, BOSS.
Researcher Affiliation | Academia | Jamie Hayes, University College London, j.hayes@cs.ucl.ac.uk; George Danezis, University College London and The Alan Turing Institute, g.danezis@ucl.ac.uk
Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes the models' architectures in prose.
Open Source Code | No | No explicit statement about releasing code or a direct link to a code repository for the methodology was found.
Open Datasets | Yes | As a proof of concept, we implemented our adversarial training scheme on two image datasets: celebrity faces in the wild (celebA) [14] and a standard steganography research dataset, BOSS (http://agents.fel.cvut.cz/boss/index.php?mode=VIEW&tmpl=materials).
Dataset Splits | No | For both the BOSS and celebA datasets, we use 10,000 samples and split in half, creating a training set and a test set. Alice was then trained on the 5000 samples from the training set. No explicit validation set is mentioned for hyperparameter tuning or early stopping. (A minimal sketch of this split appears after this table.)
Hardware Specification | Yes | All experiments in this section were performed in TensorFlow [1, 3], on a workstation with a Tesla K40 GPU card.
Software Dependencies | No | The paper mentions 'TensorFlow [1, 3]' but does not provide specific version numbers for TensorFlow or any other software libraries used.
Experiment Setup | Yes | We train in batches of 32, and use the Adam optimizer [11] with a learning rate of 2 × 10⁻⁴. At each batch we alternate training either Alice and Bob, or Eve. For each experiment, we performed grid search to find the optimum loss weights, λA, λB, λE, for Alice. (A hedged training-loop sketch appears after this table.)
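
The 50/50 split described in the Dataset Splits row is simple to reproduce. Below is a minimal NumPy sketch of that split; the image shape and the random seed are illustrative assumptions, since the paper states neither, and the placeholder array stands in for the real BOSS or celebA images.

    import numpy as np

    # Placeholder standing in for the 10,000 preprocessed BOSS or celebA images;
    # the image shape here is illustrative, not taken from the paper.
    images = np.random.rand(10000, 64, 64, 3).astype(np.float32)

    rng = np.random.default_rng(0)  # assumed seed; the paper does not report one
    indices = rng.permutation(len(images))

    # "split in half, creating a training set and a test set"
    train_set = images[indices[:5000]]  # the 5,000 samples Alice (and Bob, Eve) are trained on
    test_set = images[indices[5000:]]   # held out for evaluation; no validation set is described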
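
Since no code is released, the training schedule in the Experiment Setup row can only be approximated. The sketch below shows one plausible reading of it in TensorFlow 2 style (the paper predates TensorFlow 2, so this is not the authors' API or architecture): batches of 32, Adam at 2 × 10⁻⁴, and alternating updates between Alice/Bob and Eve at each batch. The tiny network bodies, message length, image shape, loss terms, and the λ values are all placeholders, not details from the paper.

    import tensorflow as tf

    IMG = (64, 64, 3)   # illustrative image shape, not from the paper
    MSG_BITS = 100      # illustrative message length, not from the paper

    # Placeholder networks; the paper describes its real architectures only in prose.
    def make_alice():
        cover = tf.keras.Input(IMG)
        msg = tf.keras.Input((MSG_BITS,))
        m = tf.keras.layers.Dense(IMG[0] * IMG[1] * IMG[2])(msg)
        m = tf.keras.layers.Reshape(IMG)(m)
        x = tf.keras.layers.Concatenate()([cover, m])
        stego = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
        return tf.keras.Model([cover, msg], stego)

    def make_bob():
        stego = tf.keras.Input(IMG)
        x = tf.keras.layers.Flatten()(stego)
        return tf.keras.Model(stego, tf.keras.layers.Dense(MSG_BITS, activation="sigmoid")(x))

    def make_eve():
        img = tf.keras.Input(IMG)
        x = tf.keras.layers.Flatten()(img)
        return tf.keras.Model(img, tf.keras.layers.Dense(1, activation="sigmoid")(x))

    alice, bob, eve = make_alice(), make_bob(), make_eve()

    opt_ab = tf.keras.optimizers.Adam(2e-4)   # learning rate stated in the paper
    opt_eve = tf.keras.optimizers.Adam(2e-4)
    bce = tf.keras.losses.BinaryCrossentropy()

    # Loss weights for Alice; the paper finds these by grid search, values here are placeholders.
    lam_a, lam_b, lam_e = 1.0, 1.0, 1.0

    def train_step(covers, step):
        msgs = tf.cast(tf.random.uniform([tf.shape(covers)[0], MSG_BITS]) < 0.5, tf.float32)
        if step % 2 == 0:
            # Alternate step 1: train Alice and Bob.
            with tf.GradientTape() as tape:
                stego = alice([covers, msgs], training=True)
                decoded = bob(stego, training=True)
                p_stego = eve(stego, training=False)
                loss = (lam_a * tf.reduce_mean(tf.square(stego - covers))   # stay close to the cover image
                        + lam_b * bce(msgs, decoded)                        # Bob must recover the message
                        + lam_e * bce(tf.zeros_like(p_stego), p_stego))     # push Eve toward "cover"
            vars_ab = alice.trainable_variables + bob.trainable_variables
            opt_ab.apply_gradients(zip(tape.gradient(loss, vars_ab), vars_ab))
        else:
            # Alternate step 2: train Eve to separate cover images from stego images.
            stego = alice([covers, msgs], training=False)
            with tf.GradientTape() as tape:
                p_cover = eve(covers, training=True)
                p_stego = eve(stego, training=True)
                loss = bce(tf.zeros_like(p_cover), p_cover) + bce(tf.ones_like(p_stego), p_stego)
            opt_eve.apply_gradients(
                zip(tape.gradient(loss, eve.trainable_variables), eve.trainable_variables))

    # Batches of 32, alternating which players are updated at each batch.
    covers = tf.random.uniform((320,) + IMG)   # random placeholder data in place of real images
    dataset = tf.data.Dataset.from_tensor_slices(covers).batch(32, drop_remainder=True)
    for step, batch in enumerate(dataset):
        train_step(batch, step)

The grid search over λA, λB, λE mentioned in the paper trades off how heavily Alice is penalised for distorting the cover against how strongly Bob's message recovery and the adversarial term against Eve are weighted; concrete values would have to come from re-running that search.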