Towards Robust GAN-Generated Image Detection: A Multi-View Completion Representation
Authors: Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated the generalization ability of our framework across six popular GANs at different resolutions and its robustness against a broad range of perturbation attacks. The results confirm our method's improved effectiveness, generalization, and robustness over various baselines. |
| Researcher Affiliation | Academia | 1School of Computer Science, University of Technology Sydney, Australia 2School of Electrical and Information Engineering, The University of Sydney, Australia 3City University of Macau, Macao SAR, China |
| Pseudocode | No | The paper describes the proposed framework and its components but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We choose the large-scale facial image dataset CelebA [Liu et al., 2015] and its high-quality version CelebA-HQ [Karras et al., 2018] to perform evaluations at different resolutions. ... All the four GANs are pretrained with CelebA. In the high-resolution setting, we adopt the dataset released by [He et al., 2021] 2, which includes images generated by ProGAN, StyleGAN, and StyleGAN2. Note that the ProGAN and StyleGAN are pre-trained with CelebA-HQ, while the StyleGAN2 with another facial image dataset FFHQ [Karras et al., 2019]. (Footnote 2 points to: https://github.com/SSAW14/BeyondtheSpectrum) |
| Dataset Splits | No | The paper's Table 1 reports 'Training' and 'Test' set sizes, but it does not mention a validation set or give the split percentages/counts needed for exact reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions software components like U-Net, Xception, and Adam optimizer, but does not provide specific version numbers for these or other relevant libraries/frameworks (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We train the whole framework with a batch size of 80 using the Adam optimizer [Kingma and Ba, 2015]. The initial learning rate is 1e-3, and we reduce it to half after every ten epochs. τ in Eq. 8, and λ in Eq. 2 are empirically set to 4 and 10, respectively. We also use random Gaussian noise, color jitter, and blurring for data augmentation on the restorer side. |
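The quoted setup fully specifies the optimizer schedule: an initial learning rate of 1e-3, halved after every ten epochs. As a minimal sketch of that schedule in pure Python (the function name `lr_at_epoch` is our own; the paper gives only the numbers, and a real run would typically pass an equivalent step schedule to an Adam optimizer):

```python
def lr_at_epoch(epoch, base_lr=1e-3, step=10, gamma=0.5):
    """Step-decay schedule quoted in the paper: the learning rate
    starts at base_lr and is multiplied by gamma (halved) after
    every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

# Hyperparameters stated in the paper's experiment setup
# (tau from Eq. 8, lambda from Eq. 2, batch size for Adam).
HPARAMS = {"batch_size": 80, "tau": 4, "lambda": 10}

# Epochs 0-9 use 1e-3; epochs 10-19 use 5e-4; and so on.
print(lr_at_epoch(0))   # 0.001
print(lr_at_epoch(10))  # 0.0005
print(lr_at_epoch(25))  # 0.00025
```

In a framework such as PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)` wrapped around an Adam optimizer; the restorer-side augmentations (random Gaussian noise, color jitter, blurring) would be applied in the data pipeline and are not part of the schedule itself.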