Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
StegFormer: Rebuilding the Glory of Autoencoder-Based Steganography
Authors: Xiao Ke, Huanqi Wu, Wenzhong Guo
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our StegFormer outperforms existing state-of-the-art (SOTA) models. |
| Researcher Affiliation | Academia | 1 Fujian Provincial Key Laboratory of Networking Computing and Intelligent Information Processing, College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China; 2 Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou 350116, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code will be released in https://github.com/aoli-gei/StegFormer. |
| Open Datasets | Yes | We use DIV2K to train our StegFormer, and the testing datasets comprise DIV2K (Agustsson and Timofte 2017), COCO (Lin et al. 2014), and ImageNet (Deng et al. 2009) to test the generalization ability. |
| Dataset Splits | No | The paper mentions using DIV2K for training and COCO/ImageNet for testing, but it does not specify explicit training/validation/test splits within these datasets or for the experiment setup. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The AdamW optimizer is used to train StegFormer with a cosine decay strategy, decreasing the learning rate from an initial 1e-3 to 1e-6. |
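The reported schedule (cosine decay from an initial learning rate of 1e-3 down to 1e-6) can be sketched with the standard cosine-annealing formula. This is a minimal illustration, not the authors' code; the total step count is a hypothetical parameter, since the paper excerpt does not state one.

```python
import math

def cosine_decay_lr(step, total_steps, lr_init=1e-3, lr_min=1e-6):
    """Standard cosine annealing from lr_init to lr_min over total_steps.

    Matches the schedule described in the Experiment Setup row; total_steps
    is assumed, not taken from the paper.
    """
    progress = min(step / total_steps, 1.0)
    return lr_min + 0.5 * (lr_init - lr_min) * (1.0 + math.cos(math.pi * progress))
```

For example, `cosine_decay_lr(0, 1000)` returns the initial 1e-3, and `cosine_decay_lr(1000, 1000)` returns the final 1e-6; the same curve is what frameworks such as PyTorch implement as cosine annealing.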