A New Benchmark and Model for Challenging Image Manipulation Detection
Authors: Zhenfei Zhang, Mingyang Li, Ming-Ching Chang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the CIMD benchmark show that our model significantly outperforms SoTA IMD methods on CIMD. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University at Albany, State University of New York, New York, USA, 12222 2Department of Bioengineering, McGill University, Montreal, QC, Canada, H3A 0E9 |
| Pseudocode | No | The paper describes the method using text and mathematical equations, but does not include a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The dataset is available at: https://github.com/ZhenfeiZ/CIMD. The paper states the dataset is available at this link, but does not explicitly state that the source code for the methodology is also provided there or elsewhere. |
| Open Datasets | Yes | To investigate the State-of-The-Art (SoTA) IMD methods in those challenging conditions, we introduce a new Challenging Image Manipulation Detection (CIMD) benchmark dataset, which consists of two subsets... The dataset is available at: https://github.com/ZhenfeiZ/CIMD. |
| Dataset Splits | No | The training datasets used in this study were adopted from (Kwon et al. 2022). The testing phase entailed the utilization of CIMD-R and CIMD-C to evaluate the efficacy of image-editing-based and compression-based methods, respectively. However, specific training/validation/test splits for these datasets are not provided. |
| Hardware Specification | Yes | Our model was implemented using PyTorch (Paszke et al. 2019) and trained on 8 RTX 2080 GPUs, with batch size 4. |
| Software Dependencies | Yes | Our model was implemented using PyTorch (Paszke et al. 2019) |
| Experiment Setup | Yes | Our model was implemented using PyTorch (Paszke et al. 2019) and trained on 8 RTX 2080 GPUs, with batch size 4. We set the initial learning rate as 0.001 with exponential decay. The training process consists of 250 epochs. |
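The reported setup (initial learning rate 0.001 with exponential decay over 250 epochs) can be sketched as a plain-Python schedule, mirroring what PyTorch's `ExponentialLR` would produce. The per-epoch decay factor `GAMMA` is an assumption for illustration; the paper reports only the initial rate and the schedule family, not the decay constant.

```python
# Sketch of the reported learning-rate schedule (assumptions labeled below).
INITIAL_LR = 0.001   # initial learning rate, as reported in the paper
GAMMA = 0.98         # ASSUMED per-epoch exponential decay factor (not stated in the paper)
EPOCHS = 250         # number of training epochs, as reported

def lr_at_epoch(epoch: int) -> float:
    """Learning rate after `epoch` exponential-decay steps."""
    return INITIAL_LR * GAMMA ** epoch

# Full schedule across training: starts at 0.001 and decays monotonically.
schedule = [lr_at_epoch(e) for e in range(EPOCHS)]
```

With these assumed values the rate decays smoothly from 0.001 toward a few millionths by the final epoch; any actual reproduction would need the true decay constant from the authors' code or configuration.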