DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models
Authors: Yiwei Yang, Zheyuan Liu, Jun Jia, Zhongpai Gao, Yunhao Li, Wei Sun, Xiaohong Liu, Guangtao Zhai
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments indicate substantial improvements in our method over existing ones, particularly in aspects of versatility, password sensitivity, and recovery quality. Codes are available at https://github.com/evtricks/DiffStega. |
| Researcher Affiliation | Collaboration | Yiwei Yang1 , Zheyuan Liu1 , Jun Jia1 , Zhongpai Gao2 , Yunhao Li1 , Wei Sun1 , Xiaohong Liu1 and Guangtao Zhai1 1Shanghai Jiao Tong University 2United Imaging Intelligence |
| Pseudocode | No | The paper describes the steps of its pipeline but does not include any formally labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/evtricks/DiffStega. |
| Open Datasets | Yes | All images are from the public datasets COCO [Lin et al., 2014], AFHQ [Choi et al., 2020], FFHQ [Karras et al., 2019], CelebA-HQ [Karras et al., 2018] and the Internet, center cropped and resized to 512×512. |
| Dataset Splits | No | The paper describes the UniStega dataset and its subsets but does not specify training, validation, and test splits (e.g., percentages or counts) for model reproduction. |
| Hardware Specification | Yes | All experiments are conducted on a single Nvidia RTX 3090 GPU, requiring no additional training or fine-tuning. |
| Software Dependencies | No | The paper mentions using "pre-trained SD v1.5", "picX_real", and "IP-Adapter-plus" but does not specify the versions of underlying software dependencies such as Python or PyTorch. |
| Experiment Setup | Yes | We set T = 50, and the mixing coefficient of EDICT is 0.93. We use IP-Adapter-plus [Ye et al., 2023] in Guidance Injection, and its weight factor is 1. The guidance scale of diffusion models is 1. η = 0.05 in Noise Flip. The diffusion process for ours is executed over steps [0, ξT]. We set ξ = 0.7 for experiments on style prompts and ξ = 0.6 for other prompts. |
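
The experiment-setup row above lists concrete hyperparameters (T = 50, EDICT mixing coefficient 0.93, IP-Adapter-plus weight 1, guidance scale 1, η = 0.05, ξ = 0.6/0.7). The sketch below is a minimal illustration of that reported configuration, assuming the `diffusers` library and the public SD v1.5 weights; the variable names and the config dictionary are illustrative and are not taken from the DiffStega codebase, and the DiffStega-specific stages (EDICT inversion, Guidance Injection, Noise Flip) are not reproduced here.

```python
# Minimal sketch of the reported experiment setup, assuming the diffusers
# library and public SD v1.5 weights; names are illustrative, not from the
# official DiffStega repository.
import torch
from diffusers import StableDiffusionPipeline

# Hyperparameters reported in the paper's experiment setup.
config = {
    "num_inference_steps": 50,   # T = 50
    "edict_mixing_coeff": 0.93,  # mixing coefficient of EDICT
    "ip_adapter_weight": 1.0,    # IP-Adapter-plus weight factor
    "guidance_scale": 1.0,       # guidance scale of the diffusion model
    "noise_flip_eta": 0.05,      # η in Noise Flip
    "xi_style": 0.7,             # ξ for style prompts (diffusion over [0, ξT])
    "xi_other": 0.6,             # ξ for other prompts
}

# Pre-trained SD v1.5 backbone, run on a single GPU as in the paper.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Standard text-to-image call with the reported sampler settings; the
# steganographic hiding/recovery steps of DiffStega are omitted.
image = pipe(
    "a photo of a cat",
    num_inference_steps=config["num_inference_steps"],
    guidance_scale=config["guidance_scale"],
).images[0]
```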