Neural Cover Selection for Image Steganography

Authors: Karl Chahine, Hyeji Kim

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our methodology through comprehensive experimentation on public datasets such as CelebA-HQ, ImageNet, and AFHQ. Our results demonstrate that the error rates of the optimized images are an order of magnitude lower than those of the original images under specific conditions.
Researcher Affiliation | Academia | Karl Chahine & Hyeji Kim, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712, {karlchahine, hyeji.kim}@utexas.edu
Pseudocode | Yes | Algorithm 1: Iterative Optimization
Open Source Code | Yes | Our code can be found at https://github.com/karlchahine/Neural-Cover-Selection-for-Image-Steganography.
Open Datasets | Yes | We validate our methodology through comprehensive experimentation on public datasets such as CelebA-HQ (Karras et al. [2017]), ImageNet (Russakovsky et al. [2015]), and AFHQ (Choi et al. [2020]).
Dataset Splits | No | The paper mentions "1000 training images from each class" and implicitly refers to test evaluation, but it does not explicitly describe a separate validation split (e.g., percentages, counts, or a cross-validation setup) for model tuning or performance assessment.
Hardware Specification | Yes | All experiments were conducted on an NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions using specific models such as "BigGAN" and "LISO encoder-decoder pairs" from cited works, but it does not specify software versions for programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions) used in the implementation.
Experiment Setup | Yes | To optimize the latent vector z, we minimize the binary cross-entropy loss BCE(m, m̂) using the Adam optimizer with a learning rate of 0.01 over 100 epochs. We configure our model with the following parameters: E = 50 epochs, T = 40 time steps, and N = 6 iterations per epoch. For optimization, we employ the Adam optimizer with a learning rate of 2e-06. (A hedged code sketch of this latent-optimization loop is given below the table.)
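
The Experiment Setup row describes optimizing a latent vector z so that the generated cover image hides a message m with low bit-error. The following is a minimal PyTorch sketch of that loop under stated assumptions: `G` (pretrained generator), `enc`/`dec` (a fixed steganographic encoder-decoder pair such as LISO), and `message` are placeholders for the paper's pretrained components, and the decoder is assumed to output logits; only the BCE objective, the Adam optimizer, the 0.01 learning rate, and the 100-epoch budget come from the quoted text.

```python
import torch
import torch.nn.functional as F

def optimize_cover_latent(G, enc, dec, message, z_init, lr=0.01, epochs=100):
    """Optimize latent z so the cover G(z) hides `message` with low BCE error.

    G, enc, dec, and message are hypothetical stand-ins for the paper's
    pretrained generator, steganographic encoder/decoder, and secret bits.
    """
    z = z_init.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)  # lr = 0.01 per the quoted setup

    for _ in range(epochs):                   # 100 epochs per the quoted setup
        optimizer.zero_grad()
        cover = G(z)                          # candidate cover image x = G(z)
        stego = enc(cover, message)           # embed the message into the cover
        decoded_logits = dec(stego)           # recover message estimate m̂
        loss = F.binary_cross_entropy_with_logits(decoded_logits, message)
        loss.backward()                       # gradients flow back to z only
        optimizer.step()

    return G(z).detach(), z.detach()
```

The diffusion-based configuration quoted in the same row (E = 50 epochs, T = 40 time steps, N = 6 iterations per epoch, Adam with learning rate 2e-06) would follow the same pattern, with `G` replaced by a reverse-diffusion sampler and the loop restructured around its time steps.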