Learning Iterative Neural Optimizers for Image Steganography

Authors: Xiangyu Chen, Varsha Kishore, Kilian Q. Weinberger

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the efficacy of LISO extensively across multiple datasets. We demonstrate that at test time, with unseen cover images and random bit strings, the optimizer can reliably circumvent bad local minima and find a low-error solution within only a few iterative steps that already outperforms all previous encoder-decoder-based approaches.
Researcher Affiliation | Academia | Xiangyu Chen, Varsha Kishore & Kilian Q. Weinberger, Department of Computer Science, Cornell University, Ithaca, NY 14850, USA. {xc429,vk352,kqw4}@cornell.edu
Pseudocode | Yes | Algorithm 1: Iterative Optimization. A hedged sketch of the iterative loop appears after this table.
Open Source Code | Yes | The code for LISO is available at https://github.com/cxy1997/LISO.
Open Datasets | Yes | We evaluate our method on three public datasets: 1) Div2k (Agustsson & Timofte, 2017), which is a scenic-image dataset; 2) CelebA (Liu et al., 2018), which consists of facial images of celebrities; and 3) MS COCO (Lin et al., 2014), which contains images of common household objects and scenes.
Dataset Splits | Yes | For CelebA and MS COCO we use the first 1,000 images for validation and the following 1,000 for training. A split sketch appears after the table.
Hardware Specification | Yes | The reported times were the average times on Div2k's validation set, and the methods were run on an Nvidia Titan RTX GPU.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as Python, PyTorch, TensorFlow, or other libraries; it only implies their use through the nature of the research.
Experiment Setup | Yes | During training, we set the number of encoder iterations T = 15, the step size η = 1, the decay γ = 0.8, and the loss weights λ = µ = 1. During inference, we use a smaller step size η = 0.1 for a larger number of iterations T; we iterate until the error rate converges. A configuration sketch appears after the table.
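The paper's Algorithm 1 describes LISO's learned iterative optimizer, which refines a steganographic image over several gradient-informed steps. The sketch below is only a minimal illustration of that iterative structure, not the authors' implementation: the update network `update_net`, the decoder interface, and the use of raw gradients as input features are all assumptions, and the loop is written inference-style (it detaches per step, so it is not directly trainable as shown).

```python
import torch

def iterative_encode(cover, message, update_net, decoder,
                     T=15, eta=1.0, gamma=0.8):
    """Hedged sketch of an iterative encoding loop in the spirit of
    LISO's Algorithm 1. `update_net` and `decoder` are hypothetical
    stand-ins; shapes of `message` and decoder logits are assumed
    to match."""
    x = cover.clone()
    step = eta
    for _ in range(T):
        # Re-attach x as a leaf each step (inference-style sketch).
        x = x.detach().requires_grad_(True)
        logits = decoder(x)
        # Message-recovery loss; BCE over predicted bits (assumed).
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            logits, message)
        grad, = torch.autograd.grad(loss, x)
        # A learned optimizer maps the current image and its gradient
        # to an update direction, rather than taking a raw gradient step.
        delta = update_net(torch.cat([x, grad], dim=1))
        x = x + step * delta
        step *= gamma  # decayed step size, matching the reported setup
    return x.detach()
```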
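The stated split ("first 1,000 for validation, following 1,000 for training") is simple enough to sketch. Directory layout, file extension, and sorted-filename ordering below are assumptions, not details from the paper.

```python
from pathlib import Path

def make_split(image_dir):
    """Hedged sketch of the reported CelebA / MS COCO split:
    first 1,000 images for validation, next 1,000 for training."""
    files = sorted(Path(image_dir).glob("*.jpg"))  # ordering assumed
    val_files = files[:1000]
    train_files = files[1000:2000]
    return train_files, val_files
```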
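Finally, the reported hyperparameters and the "iterate until the error rate converges" inference rule can be sketched as below. The helpers `encode_step` and `bit_error_rate`, the patience-based stopping criterion, and the iteration cap are hypothetical; the paper only states that inference uses a smaller step size over more iterations until convergence.

```python
# Hyperparameters as reported in the paper.
TRAIN_CFG = dict(T=15, eta=1.0, gamma=0.8, lam=1.0, mu=1.0)
INFER_CFG = dict(eta=0.1)

def encode_until_converged(x, message, encode_step, bit_error_rate,
                           eta=0.1, patience=5, max_iters=150):
    """Hedged sketch: keep refining until the bit error rate stops
    improving for `patience` steps. Stopping rule and cap are assumed."""
    best_err, stall = float("inf"), 0
    for _ in range(max_iters):
        x = encode_step(x, message, step_size=eta)
        err = bit_error_rate(x, message)
        if err < best_err:
            best_err, stall = err, 0
        else:
            stall += 1
            if stall >= patience:  # error rate has converged
                break
    return x
```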