Self-Supervised Image Local Forgery Detection by JPEG Compression Trace

Authors: Xiuli Bi, Wuqing Yan, Bo Liu, Bin Xiao, Weisheng Li, Xinbo Gao

Venue: AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that the proposed method can detect image local forgery on different datasets without re-training, and keep stable performance over various types of image local forgery." "Extensive experiments show that the proposed method has a good ability to detect various local forgeries in JPEG images and can resist cropping attacks well." Section headings cited as further evidence: "Experiments", "Dataset and Metric", "Experimental Dataset", "Ablation Study", "Comparison with the State-of-the-Art".
Researcher Affiliation | Academia | Xiuli Bi, Wuqing Yan, Bo Liu, Bin Xiao*, Weisheng Li, Xinbo Gao; Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; {bixl, boliu, xiaobin, liws, gaoxb}@cqupt.edu.cn, s210201122@stu.cqupt.edu.cn
Pseudocode | No | The paper contains no pseudocode or clearly labeled algorithm blocks; the methods are described in narrative form and with figures.
Open Source Code | No | The paper does not provide any explicit statement about making its source code available, nor does it include a link to a code repository.
Open Datasets | Yes | "We only used 200 TIFF-formatted images randomly selected from the ALASKA (Ruiz et al. 2021) dataset."
Dataset Splits | No | The paper mentions cropping images into patches for training ("we cropped 200 TIFF images into 12800 48x48 patches as the training set") but does not specify a separate validation split or how it was handled. A hypothetical patch-extraction sketch follows the table.
Hardware Specification | Yes | "Our proposed method was implemented by Tensorflow and trained on NVIDIA GeForce RTX 3090 GPU."
Software Dependencies | No | The paper names Tensorflow as the implementation framework but gives no version number for it or for any other key software dependency.
Experiment Setup | Yes | "The batch size was set to 128, and the patches within a batch are randomly compressed with a quality factor QF ∈ [50, 100]. The Adam optimizer was used with the learning rate of 0.001, and the λ in Eq. 8 was set to 0.1." "In self-supervised training, each batch, sized 200, is divided into 50 groups (N=50), and each group is compressed with a different quality factor in [50, 100]. The Adam optimizer was used with a learning rate of 0.0001." See the configuration sketch after the table.
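
The dataset-splits row quotes the patch-preparation step: 200 TIFF images are cropped into 12800 patches of size 48x48, which works out to 64 patches per image. Below is a minimal sketch of that step, assuming random crops implemented with Pillow and NumPy; the directory name alaska_tiff/, the random-crop policy, and the fixed random seed are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the patch-preparation step quoted in the table:
# 200 TIFF images -> 12800 patches of 48x48 (64 patches per image).
# Paths, the random-crop policy, and library choices are assumptions.
import glob

import numpy as np
from PIL import Image

PATCH = 48
PATCHES_PER_IMAGE = 64  # 200 images * 64 patches = 12800 patches

def crop_patches(path, rng):
    img = np.asarray(Image.open(path))      # uncompressed TIFF source image
    h, w = img.shape[:2]
    patches = []
    for _ in range(PATCHES_PER_IMAGE):
        y = rng.integers(0, h - PATCH + 1)  # random top-left corner
        x = rng.integers(0, w - PATCH + 1)
        patches.append(img[y:y + PATCH, x:x + PATCH])
    return patches

rng = np.random.default_rng(0)
train_set = []
for path in sorted(glob.glob("alaska_tiff/*.tif"))[:200]:
    train_set.extend(crop_patches(path, rng))

print(len(train_set))  # 12800 patches, each 48x48
```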
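
The experiment-setup row quotes two training configurations: a first stage with batch size 128 and per-patch random JPEG compression (QF ∈ [50, 100], Adam with learning rate 0.001), and a self-supervised stage where each batch of 200 is split into 50 groups, every group compressed with a different quality factor (Adam with learning rate 0.0001). The sketch below wires only these quoted hyperparameters into a TensorFlow input pipeline; the model, the loss with the λ = 0.1 term from Eq. 8, and the choice of tf.image.adjust_jpeg_quality as the compressor are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the quoted training configurations, assuming float32
# patches in [0, 1] with shape (48, 48, C) and tf.image.adjust_jpeg_quality
# as the JPEG compressor. Only the hyperparameters come from the paper.
import tensorflow as tf

BATCH = 128  # first-stage batch size from the paper

def random_jpeg_compress(patch):
    # Draw an integer QF uniformly from [50, 100] for each patch.
    qf = tf.random.uniform([], 50, 101, dtype=tf.int32)
    return tf.image.adjust_jpeg_quality(patch, qf)

def make_first_stage_dataset(patches):
    ds = tf.data.Dataset.from_tensor_slices(patches)
    return ds.shuffle(12800).map(random_jpeg_compress).batch(BATCH)

def compress_in_groups(batch, n_groups=50):
    # Self-supervised stage: a batch of 200 patches is split into 50 groups
    # (N = 50, so 4 patches per group); each group shares one distinct QF.
    groups = tf.split(batch, n_groups)
    qfs = tf.random.shuffle(tf.range(50, 101))[:n_groups]
    out = [tf.map_fn(lambda p, q=qf: tf.image.adjust_jpeg_quality(p, q), g)
           for g, qf in zip(groups, tf.unstack(qfs))]
    return tf.concat(out, axis=0)

first_stage_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)   # lr = 0.001
self_sup_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)      # lr = 0.0001
```

Splitting 200 into 50 groups of 4 and sampling the 50 quality factors without replacement guarantees every group in a batch sees a different QF, which matches the quoted setup; the paper does not state whether its QFs are sampled with or without replacement, so that detail is an assumption here.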