Fixed Neural Network Steganography: Train the images, not the network
Authors: Varsha Kishore, Xiangyu Chen, Yan Wang, Boyi Li, Kilian Q Weinberger
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental Setup. We evaluate FNNS on three diverse datasets: a scenic image dataset Div2k (Agustsson & Timofte, 2017), a 2D object detection dataset MS-COCO (Lin et al., 2014), and a human face dataset CelebA (Liu et al., 2015). Quantitative Comparison. We compare FNNS with SteganoGAN, the current state-of-the-art method, in Table 2. |
| Researcher Affiliation | Academia | Varsha Kishore, Xiangyu Chen, Yan Wang, Boyi Li & Kilian Weinberger, Department of Computer Science, Cornell University, Ithaca, NY 14850, USA, {vk352, xc429, yw763, bl728, kqw4}@cornell.edu |
| Pseudocode | Yes | Algorithm 1 Adversarial Attack for Message Hiding. 1: Inputs: decoder network F, cover image X, secret message M. 2: Hyper-parameters: learning rate α > 0, perturbation bound ϵ > 0, optimization steps n > 0, max L-BFGS iterations k > 0. 3: X̃ ← X. 4: for n iterations do: 5: X̃ = LBFGS(F(X̃), M, L_BCE, k) ▷ take k steps to optimize L_BCE(F(X̃), M). 6: δ ← clip_{-ϵ}^{ϵ}(X̃ − X) ▷ clip pixel value changes exceeding ϵ. 7: X̃ ← clip_{0}^{1}(X + δ) ▷ clip pixel values to [0, 1]. 8: return X̃. (A runnable sketch of this loop appears after the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/varshakishore/FNNS. |
| Open Datasets | Yes | Experimental Setup. We evaluate FNNS on three diverse datasets: a scenic image dataset Div2k (Agustsson & Timofte, 2017), a 2D object detection dataset MS-COCO (Lin et al., 2014), and a human face dataset CelebA (Liu et al., 2015). |
| Dataset Splits | Yes | For each dataset, we use the provided test/validation images (if unavailable, we use the first 100 images in the dataset for validation). |
| Hardware Specification | Yes | Table 8 shows the amount of time required to encode a message with different FNNS variants and different bit rates (with standard deviations in parentheses) on an NVIDIA GTX 1080 GPU. |
| Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | The hyper-parameters used for Algorithm 1 are as follows: perturbation bound ϵ = 0.3, optimization steps n = 100, and L-BFGS iterations k = 10, with early stopping if the output has zero error. In cases where the image quality of X̃ is poor, we restart optimization with a different learning rate α. Concretely, we set the learning rate to 0.1 and change it to 0.05 or 0.5 if the output image gets a PSNR lower than 20. We train SteganoGAN models for only one epoch for FNNS-D and FNNS-DE, as we observe that a fully-trained (32 epochs) SteganoGAN decoder over-fits to its training objective such that it is hard to use it for FNNS. (A hedged sketch of this learning-rate restart heuristic appears after the table.) |
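
The pseudocode row above describes an image-optimization loop that can be approximated in a few lines of PyTorch. The sketch below is not the authors' implementation (their official code is at https://github.com/varshakishore/FNNS); it assumes a fixed decoder `decoder` that maps an image tensor to per-pixel message logits, a cover image `X` with values in [0, 1], and a binary {0, 1} float message tensor `M` of matching shape, and it uses `torch.optim.LBFGS` for the inner optimization.

```python
import torch
import torch.nn.functional as nnf


def fnns_encode(decoder, X, M, lr=0.1, eps=0.3, n_steps=100, lbfgs_iters=10):
    """Hide message M in cover image X by optimizing the image, not the network."""
    X_adv = X.clone().detach().requires_grad_(True)

    for _ in range(n_steps):
        # Take up to k L-BFGS steps on the binary cross-entropy decoding loss.
        optimizer = torch.optim.LBFGS([X_adv], lr=lr, max_iter=lbfgs_iters)

        def closure():
            optimizer.zero_grad()
            loss = nnf.binary_cross_entropy_with_logits(decoder(X_adv), M)
            loss.backward()
            return loss

        optimizer.step(closure)

        with torch.no_grad():
            # Project back: clip the perturbation to the eps-ball, then pixels to [0, 1].
            delta = torch.clamp(X_adv - X, -eps, eps)
            X_adv.copy_(torch.clamp(X + delta, 0.0, 1.0))

            # Early stopping once the decoded message has zero bit errors
            # (assumes the decoder outputs logits, so 0 is the 0.5 threshold).
            if torch.equal((decoder(X_adv) > 0).float(), M):
                break

    return X_adv.detach()
```

Re-creating the L-BFGS optimizer each outer iteration simply resets its curvature history after the clipping step; the authors' code may handle this differently.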
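
The experiment-setup row mentions restarting optimization with a different learning rate when the encoded image's PSNR falls below 20. A minimal sketch of such a restart wrapper, under the assumptions above, is shown below; the helper names (`psnr`, `encode_with_restarts`) and the fallback behavior are hypothetical, not the paper's exact procedure, and `fnns_encode` refers to the previous sketch.

```python
import torch


def psnr(x, y, max_val=1.0):
    # Peak signal-to-noise ratio (in dB) between the encoded and the cover image.
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)


def encode_with_restarts(decoder, X, M, lrs=(0.1, 0.05, 0.5), min_psnr=20.0):
    best_img, best_score = None, float("-inf")
    for lr in lrs:
        X_adv = fnns_encode(decoder, X, M, lr=lr)
        score = psnr(X_adv, X)
        if score >= min_psnr:
            return X_adv  # image quality is acceptable; stop here
        if score > best_score:
            best_img, best_score = X_adv, score
    # No learning rate reached the PSNR target; return the best attempt (assumed fallback).
    return best_img
```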