Discrepancy-Guided Reconstruction Learning for Image Forgery Detection

Authors: Zenan Shi, Haipeng Chen, Long Chen, Dong Zhang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on four challenging datasets validate the effectiveness of our proposed method against state-of-the-art competitors." and "Extensive experiments are carried out on four commonly used yet challenging face forgery detection datasets. Results validate that our DisGRL can achieve state-of-the-art performance on both seen and unseen forgeries."
Researcher Affiliation | Academia | "Zenan Shi1,2, Haipeng Chen1,2, Long Chen3, Dong Zhang3 — 1College of Computer Science and Technology, Jilin University; 2Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University; 3Department of CSE, The Hong Kong University of Science and Technology. {shizn, chenhp}@jlu.edu.cn, {longchen, dongz}@ust.hk"
Pseudocode | No | The paper describes its methods in prose and provides architectural diagrams (Figures 1-4) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | "To facilitate a fair result comparison with state-of-the-art methods, we conducted experiments on four fundamental yet challenging face forgery datasets, including FaceForensics++ (FF++) [Rössler et al., 2019], Celeb-DF [Li et al., 2020b], WLD [Zi et al., 2020], and DFDC [Dolhansky et al., 2019]."
Dataset Splits | No | The paper mentions training and testing on the datasets, and its "Intra-Dataset Evaluation" implies standard splits are used, but it does not state train/validation/test split percentages or sample counts needed for reproduction. It only notes that "In the training phase, the batch size is set to 32".
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU model, CPU type) used to run the experiments.
Software Dependencies | No | "We implemented our model on the PyTorch framework." The paper mentions PyTorch but does not specify its version or any other software dependencies and their versions.
Experiment Setup | Yes | "The input face images are resized into 299×299 and augmented by random horizontal flipping. In the training phase, the batch size is set to 32, and the Adam optimizer [Kingma and Ba, 2015] with learning rate 1e-4 and weight decay 1e-5 is adopted to optimize the model. The step learning rate strategy with a gamma of 0.5 is utilized to adjust the learning rate. Following [Cao et al., 2022], λ1, λ2, and λ3 in Eq. (12) are empirically set to 0.1."
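The optimizer settings quoted above are complete enough to sketch in PyTorch. Note that this is a reconstruction under assumptions, not the authors' code: the DisGRL architecture is not released (a placeholder linear model stands in for it), and the paper gives only gamma=0.5 for the step schedule, so the `step_size` below is a hypothetical value.

```python
# Sketch of the reported optimization setup, assuming PyTorch.
import torch

# Placeholder model: the paper's DisGRL network is not publicly available.
model = torch.nn.Linear(299 * 299 * 3, 2)

# Adam with the learning rate and weight decay stated in the paper.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,
    weight_decay=1e-5,
)

# Step learning-rate decay; gamma=0.5 is from the paper,
# but step_size is NOT stated and is assumed here.
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer,
    step_size=10,
    gamma=0.5,
)

# Loss weights in Eq. (12), following [Cao et al., 2022].
lambda1 = lambda2 = lambda3 = 0.1

# Batch size from the paper; input faces are resized to 299x299
# and augmented only by random horizontal flipping.
batch_size = 32
```

With the assumed `step_size=10`, the learning rate halves from 1e-4 to 5e-5 after ten epochs; the true decay interval cannot be recovered from the paper.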