Steganography of Steganographic Networks
Authors: Guobiao Li, Sheng Li, Meiling Li, Xinpeng Zhang, Zhenxing Qian
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Various experiments have been conducted to demonstrate the advantage of our proposed method for covert communication of steganographic networks as well as general DNN models. |
| Researcher Affiliation | Academia | Guobiao Li, Sheng Li*, Meiling Li, Xinpeng Zhang*, Zhenxing Qian Fudan University {gbli20, lisheng, mlli20, zhangxinpeng, zxqian}@fudan.edu.cn |
| Pseudocode | Yes | Algorithm 1: Progressive Model Disguising Algorithm. |
| Open Source Code | No | The paper does not include any statement about releasing open-source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We randomly select 11000 images from the COCO dataset to form the secret dataset... We use the GTSRB (Stallkamp et al. 2012) classification task as the stego task... For ResNet18, we assume the Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) classification task as the secret task and take the CIFAR10 (Krizhevsky, Hinton et al. 2009) classification task as the stego task... U-net is trained/tested on a secret dataset with 10000/1000 images randomly selected from ImageNet (Deng et al. 2009)... where the Oxford-Pet dataset (Parkhi et al. 2012) is adopted as the stego dataset. |
| Dataset Splits | Yes | We randomly select 11000 images from the COCO dataset to form the secret dataset which is split into 10000/1000 images for training and testing. We use the GTSRB dataset as the stego dataset, where 80%/20% of the images are randomly selected for training/testing. For both the secret and stego datasets, we use their default partitions for training and testing... We randomly separate the Oxford-Pet dataset into three parts, including 6000 images for training, 1282 images for validating and 100 images for testing. (A split sketch follows the table.) |
| Hardware Specification | Yes | All our experiments are conducted on Ubuntu 18.04 system with four NVIDIA GTX 1080 Ti GPUs. |
| Software Dependencies | No | The paper mentions 'All the models take the BN as normalization layer and are optimized using Adam (Kingma and Ba 2014) optimizer', but it does not specify versions for software dependencies like programming languages or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For both SSN and SGN, we set λ_g = 0.01, λ_e = λ_t = 0.001, λ_p = 0.9, and τ_st = 0.01. For τ_se, we set 0.0001, 0.01, and 0.5 for the secret decoder of HiDDeN, ResNet18, and U-net, respectively. We reinitialize the remaining filters by Kaiming initialization (He et al. 2015). All the models take BN as the normalization layer and are optimized using the Adam (Kingma and Ba 2014) optimizer. (A configuration sketch follows the table.) |
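
The split sizes quoted in the Dataset Splits row (11000 COCO images divided 10000/1000, and a random 80%/20% partition of GTSRB) are enough to sketch the data preparation. Below is a minimal sketch, assuming PyTorch/torchvision; the dataset paths, directory layout, image transform, and random seed are illustrative assumptions, since the paper releases no code.

```python
# Sketch of the reported splits: 11000 COCO images -> 10000/1000 train/test
# (secret task), and a random 80%/20% split of GTSRB (stego task).
# Paths, transform, and seed are assumptions, not the authors' pipeline.
import torch
from torch.utils.data import Subset, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),  # input size is an assumption
    transforms.ToTensor(),
])

g = torch.Generator().manual_seed(0)  # fixed seed for reproducible splits

# Secret dataset: 11000 randomly selected COCO images, split 10000/1000.
# The ImageFolder layout (images under class subfolders) is an assumption.
coco = datasets.ImageFolder("data/coco_subset", transform=transform)
indices = torch.randperm(len(coco), generator=g)[:11000].tolist()
secret_train, secret_test = random_split(
    Subset(coco, indices), [10000, 1000], generator=g)

# Stego dataset: GTSRB with 80%/20% of the images for training/testing.
gtsrb = datasets.GTSRB("data", split="train", transform=transform,
                       download=True)
n_train = int(0.8 * len(gtsrb))
stego_train, stego_test = random_split(
    gtsrb, [n_train, len(gtsrb) - n_train], generator=g)
```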
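
The Experiment Setup row also maps naturally onto a small configuration sketch: the loss weights and thresholds below are the quoted values, and the paper states that remaining filters are Kaiming-reinitialized, BN is the normalization layer, and Adam is the optimizer. The toy model, the filter-selection mask, and the learning rate are assumptions added for illustration.

```python
# Sketch of the reported training configuration, assuming PyTorch.
# Loss weights/thresholds are the quoted values; the toy model,
# keep-mask, and learning rate are illustrative assumptions.
import torch
import torch.nn as nn

cfg = dict(lambda_g=0.01, lambda_e=0.001, lambda_t=0.001,
           lambda_p=0.9, tau_st=0.01)
tau_se = {"HiDDeN": 0.0001, "ResNet18": 0.01, "U-net": 0.5}

def reinit_remaining_filters(conv: nn.Conv2d, keep_mask: torch.Tensor):
    """Redraw the filters NOT kept for the secret task with Kaiming
    initialization (He et al. 2015); `keep_mask` is a boolean vector
    over the conv layer's output channels."""
    with torch.no_grad():
        fresh = torch.empty_like(conv.weight)
        nn.init.kaiming_normal_(fresh, mode="fan_out", nonlinearity="relu")
        conv.weight[~keep_mask] = fresh[~keep_mask]

# Toy block: BN as the normalization layer, Adam as the optimizer.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                      nn.BatchNorm2d(64),
                      nn.ReLU())
keep = torch.zeros(64, dtype=torch.bool)
keep[:32] = True                        # hypothetical secret-task filters
reinit_remaining_filters(model[0], keep)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr assumed
```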