StegaStyleGAN: Towards Generic and Practical Generative Image Steganography

Authors: Wenkang Su, Jiangqun Ni, Yiyan Sun

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments and Analysis. Datasets: The entire CelebA (Liu et al. 2015) dataset will be resized and then cropped to 32² and 128² resolution to serve as training sets. In addition, 200k images sampled from the LSUN-bedroom (Yu et al. 2016) will be resized to 256² resolution to serve as training sets as well. Implementation Details: Our model is implemented in PyTorch and trained on 1 NVIDIA RTX 3090 GPU. The batch size is set to 16. The Adam optimizer with β1 = 0.0, β2 = 0.99 and ε = 10⁻⁸ is used in training. The learning rates for the generator, discriminator, and extractor are all set to 0.0002. The hyper-parameters α and γ are set to 2 and 10, respectively. As for λ, it is initialized with a large value to ensure that L_E is competitive with L_Adv^D at the early training stage, then decayed once every 50 iterations, i.e., λ = λ_init · 0.98^⌊Iter/50⌋, and the decay stops once λ falls below a given lower bound. 250k iterations of StyleGAN2 pre-training are performed prior to training the proposed StegaStyleGAN, with which to initialize the parameters of StegaStyleGAN. Evaluation Metric: Similar to previous arts, we use Fréchet Inception Distance (FID), extraction accuracy of data (Acc), and the detection error rate of the steganalyzer (Pe) to quantify the performance. (Hedged sketches of the dataset preparation, training configuration, and evaluation metrics follow the table.)
Researcher Affiliation | Academia | Wenkang Su (1,2), Jiangqun Ni (1,3)*, Yiyan Sun (1); 1: Sun Yat-sen University, 2: Guangzhou University, 3: Peng Cheng Laboratory; swk1004@gzhu.edu.cn, issjqni@mail.sysu.edu.cn, sunyy27@mail2.sysu.edu.cn
Pseudocode | No | The paper describes methods and training strategies in text and diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | More supplementary material can be found at https://github.com/vazswk/StegaStyleGAN.git.
Open Datasets | Yes | Datasets: The entire CelebA (Liu et al. 2015) dataset will be resized and then cropped to 32² and 128² resolution to serve as training sets. In addition, 200k images sampled from the LSUN-bedroom (Yu et al. 2016) will be resized to 256² resolution to serve as training sets as well.
Dataset Splits | No | The paper mentions 'training sets' and refers to a 'test set' for evaluation, but it does not provide explicit details about a separate validation split or how it was used in the experimental setup.
Hardware Specification | Yes | Our model is implemented in PyTorch and trained on 1 NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper states that the model is implemented in PyTorch and uses the Adam optimizer, but it does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | Implementation Details: Our model is implemented in PyTorch and trained on 1 NVIDIA RTX 3090 GPU. The batch size is set to 16. The Adam optimizer with β1 = 0.0, β2 = 0.99 and ε = 10⁻⁸ is used in training. The learning rates for the generator, discriminator, and extractor are all set to 0.0002. The hyper-parameters α and γ are set to 2 and 10, respectively. As for λ, it is initialized with a large value to ensure that L_E is competitive with L_Adv^D at the early training stage, then decayed once every 50 iterations, i.e., λ = λ_init · 0.98^⌊Iter/50⌋, and the decay stops once λ falls below a given lower bound.
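
The Datasets row above gives only a resize-then-crop recipe. Below is a minimal sketch of that preprocessing, assuming torchvision transforms; the choice of a center crop and the default interpolation mode are assumptions, since the quoted text does not specify them.

```python
# Hedged sketch of the quoted dataset preparation. The crop location
# (center crop) and interpolation mode are assumptions; the quote only
# says "resized and then cropped" for CelebA and "resized" for LSUN-bedroom.
from torchvision import transforms

celeba_32 = transforms.Compose([
    transforms.Resize(32),          # shorter side -> 32
    transforms.CenterCrop(32),      # 32 x 32 training images
    transforms.ToTensor(),
])

celeba_128 = transforms.Compose([
    transforms.Resize(128),
    transforms.CenterCrop(128),     # 128 x 128 training images
    transforms.ToTensor(),
])

lsun_bedroom_256 = transforms.Compose([
    transforms.Resize((256, 256)),  # direct resize to 256 x 256; no crop is quoted
    transforms.ToTensor(),
])
```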
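The Implementation Details and Experiment Setup rows fix the batch size, the Adam hyper-parameters, the learning rates, and the λ decay schedule λ = λ_init · 0.98^⌊Iter/50⌋. The sketch below wires those numbers into PyTorch; the placeholder networks, λ_init, the lower bound, and the loss composition are illustrative assumptions rather than the authors' code.

```python
# Hedged sketch of the quoted training configuration. Only the batch size,
# Adam hyper-parameters, learning rates, and the lambda decay schedule come
# from the quote; LAMBDA_INIT, LAMBDA_MIN, and the placeholder networks are
# illustrative assumptions.
import torch

BATCH_SIZE = 16
LR = 2e-4                    # generator, discriminator, and extractor
BETAS = (0.0, 0.99)          # Adam beta1, beta2
EPS = 1e-8                   # Adam epsilon
ALPHA, GAMMA = 2, 10         # hyper-parameters alpha and gamma from the quote
LAMBDA_INIT = 100.0          # "a large value" (assumed)
LAMBDA_MIN = 1.0             # "a given lower bound" (assumed)

def lambda_schedule(iteration: int) -> float:
    """lambda = lambda_init * 0.98 ** floor(iteration / 50), clamped at LAMBDA_MIN."""
    return max(LAMBDA_INIT * 0.98 ** (iteration // 50), LAMBDA_MIN)

# Placeholder modules standing in for the StegaStyleGAN generator,
# discriminator, and secret-message extractor.
G = torch.nn.Linear(512, 3 * 32 * 32)
D = torch.nn.Linear(3 * 32 * 32, 1)
E = torch.nn.Linear(3 * 32 * 32, 128)

opt_G = torch.optim.Adam(G.parameters(), lr=LR, betas=BETAS, eps=EPS)
opt_D = torch.optim.Adam(D.parameters(), lr=LR, betas=BETAS, eps=EPS)
opt_E = torch.optim.Adam(E.parameters(), lr=LR, betas=BETAS, eps=EPS)

# The extraction loss L_E would be weighted by lambda_schedule(it) at
# iteration it so that it stays competitive with the adversarial loss
# early in training; the exact loss composition is not quoted here.
print([round(lambda_schedule(it), 2) for it in (0, 500, 5000, 50000)])
```

Note that the quoted 250k iterations refer to StyleGAN2 pre-training used to initialize StegaStyleGAN's parameters, not to the length of the decay schedule above.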
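The Evaluation Metric row names FID, Acc, and Pe without formal definitions. The sketch below computes Acc (bit extraction accuracy) and Pe under the common steganalysis convention Pe = min over thresholds of (P_FA + P_MD)/2; whether the paper uses exactly this threshold-minimized form is an assumption, and FID is omitted since it is normally taken from an off-the-shelf implementation.

```python
# Hedged sketch of the quoted evaluation metrics Acc and Pe.
# The Pe definition (mean of false-alarm and missed-detection rates,
# minimized over the decision threshold) is the usual steganalysis
# convention and is assumed here, not quoted from the paper.
import numpy as np

def extraction_accuracy(recovered_bits: np.ndarray, embedded_bits: np.ndarray) -> float:
    """Acc: fraction of secret-message bits recovered correctly by the extractor."""
    return float((recovered_bits == embedded_bits).mean())

def detection_error_rate(scores_cover: np.ndarray, scores_stego: np.ndarray) -> float:
    """Pe = min over thresholds of 0.5 * (P_FA + P_MD); higher scores mean 'stego'."""
    thresholds = np.unique(np.concatenate([scores_cover, scores_stego]))
    pe = 1.0
    for t in thresholds:
        p_fa = float((scores_cover >= t).mean())   # covers flagged as stego
        p_md = float((scores_stego < t).mean())    # stego images missed
        pe = min(pe, 0.5 * (p_fa + p_md))
    return pe

# Usage with random placeholder data:
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=1000)
print(extraction_accuracy(bits, bits))                                        # 1.0
print(detection_error_rate(rng.normal(0, 1, 500), rng.normal(0.5, 1, 500)))   # ~0.4
```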