T-GD: Transferable GAN-generated Images Detection Framework

Authors: Hyeonseong Jeon, Young Oh Bang, Junyaup Kim, Simon Woo

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we present the Transferable GAN-images Detection framework (T-GD), a robust transferable framework for an effective detection of GAN-images. (Section 4: Experimental Results)
Researcher Affiliation | Academia | 1 Department of Artificial Intelligence, Sungkyunkwan University, Suwon, S. Korea; 2 Computer Science and Engineering Department, Sungkyunkwan University, Suwon, S. Korea; 3 Department of Applied Data Science, Sungkyunkwan University, Suwon, S. Korea.
Pseudocode | Yes | Algorithm 1: Self-training for L2-SP; Algorithm 2: Intra-class CutMix algorithm. (A hedged sketch of both algorithms appears after the table.)
Open Source Code | Yes | Our code is available at https://github.com/cutz-j/T-GD
Open Datasets | Yes | PGGAN: used the official implementation dataset provided by the authors (https://github.com/tkarras/progressive_growing_of_gans), consisting of 100,000 GAN-generated fake celebrity images at 1024 × 1024 resolution generated from the CelebA-HQ dataset; for the experiment, each image was resized to 128 × 128. StarGAN: used the official implementation source code and the CelebA dataset (Liu et al., 2015). StyleGAN: used the official implementation dataset provided by the authors (https://github.com/NVlabs/stylegan), generated from the FFHQ (Karras et al., 2019a) dataset.
Dataset Splits | Yes | Table 2. GAN-generated datasets used in our experiment, where the train, validation, test, and transfer datasets are shown.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions software components such as EfficientNet and ResNeXt as models and SGD as an optimizer, but does not provide version numbers for any software libraries, programming languages, or development environments used in the implementation.
Experiment Setup | Yes | For pre-training both teacher models, we use a batch size of 512, the stochastic gradient descent (SGD) optimizer with momentum 0.9, and a gradual warm-up (by a factor of 4) over the first 20 epochs with cosine annealing. The initial learning rate is 0.04 and training runs for 300 epochs. Data augmentation is applied at a rate of 0.2 each: JPEG compression, Gaussian blur, intra-class CutMix, random horizontal flip, dropout, and stochastic depth. In the transfer-learning stage, we use a batch size of 200, the SGD optimizer with momentum 0.1, and an initial learning rate of 0.01. All augmentation rates are raised to 0.5 except dropout and stochastic depth: JPEG compression (0.5), Gaussian blur (0.5), intra-class CutMix (0.5), random horizontal flip (0.5), dropout (0.2), and stochastic depth (0.2). Training completes at 1,000 iterations. (A hedged configuration sketch follows the table.)
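
Note on the Pseudocode row: the paper's Algorithm 1 (self-training for L2-SP) and Algorithm 2 (intra-class CutMix) are not reproduced verbatim above. The following is a minimal PyTorch-style sketch of the two ideas as described, assuming the standard L2-SP formulation (penalizing the distance of the fine-tuned weights from the pre-trained starting point) and same-class patch mixing; the function names and defaults (l2_sp_penalty, intra_class_cutmix, alpha, beta, p) are illustrative assumptions, not the authors' implementation, which is available at https://github.com/cutz-j/T-GD.

    import numpy as np
    import torch

    def l2_sp_penalty(model, source_weights, alpha=0.01):
        # L2-SP-style regularizer (assumption): penalize the squared L2 distance
        # of the current weights from the pre-trained starting-point weights.
        penalty = 0.0
        for name, param in model.named_parameters():
            if name in source_weights:
                penalty = penalty + ((param - source_weights[name]) ** 2).sum()
        return alpha * penalty

    def intra_class_cutmix(images, labels, beta=1.0, p=0.5):
        # Intra-class CutMix (assumption): paste a random patch taken from
        # another image of the same class, so each image's label is unchanged.
        if np.random.rand() > p:
            return images
        images = images.clone()
        lam = np.random.beta(beta, beta)
        _, _, h, w = images.shape
        cut_h, cut_w = int(h * np.sqrt(1.0 - lam)), int(w * np.sqrt(1.0 - lam))
        cy, cx = np.random.randint(h), np.random.randint(w)
        y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
        x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
        for i in range(images.size(0)):
            same = (labels == labels[i]).nonzero(as_tuple=True)[0]
            j = same[torch.randint(len(same), (1,))].item()
            images[i, :, y1:y2, x1:x2] = images[j, :, y1:y2, x1:x2]
        return images

In training, l2_sp_penalty would be added to the classification loss during transfer, and intra_class_cutmix applied to each batch at the rates listed in the Experiment Setup row.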
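
Note on the Experiment Setup row: since no software versions are given, the reported pre-training hyperparameters could be wired up in PyTorch roughly as below. This is a sketch under stated assumptions only: the backbone is a placeholder (the paper uses EfficientNet and ResNeXt), and the exact form of the "warm-up by 4 times" schedule is an interpretation, not taken from the authors' code.

    import math
    import torch
    from torch import nn, optim

    # Placeholder backbone; the paper's EfficientNet/ResNeXt models are not built here.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 2))

    # Pre-training stage as reported: batch size 512, SGD with momentum 0.9,
    # initial learning rate 0.04, 300 epochs, 20 warm-up epochs, cosine annealing.
    epochs, warmup_epochs, base_lr = 300, 20, 0.04
    optimizer = optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

    def lr_lambda(epoch):
        # Linear warm-up to the base learning rate, then cosine annealing (assumed schedule).
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

For the transfer stage, the same scaffolding would switch to a batch size of 200, momentum 0.1, an initial learning rate of 0.01, augmentation rates of 0.5 (dropout and stochastic depth kept at 0.2), and stop after 1,000 iterations.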