FrePGAN: Robust Deepfake Detection Using Frequency-Level Perturbations

Authors: Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Jongwon Choi (pp. 1060-1068)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "For experiments, we design new test scenarios varying from the training settings in GAN models, color manipulations, and object categories. Numerous experiments validate the state-of-the-art performance of our deepfake detector. To validate the performance of our model, we conduct numerous experiments using multiple deepfake datasets." The paper also includes an ablation study: "To validate the effectiveness of the components in the proposed framework, we also test several variants."
Researcher Affiliation | Collaboration | Yonghyun Jeong (1,2*), Doyeon Kim (1), Youngmin Ro (1,3*), Jongwon Choi (4). Affiliations: 1: Samsung SDS, Seoul, Korea; 2: Clova, NAVER, Seoul, Korea; 3: Department of Artificial Intelligence, University of Seoul, Seoul, Korea; 4: Department of Advanced Imaging, Chung-Ang University, Seoul, Korea.
Pseudocode | Yes | Algorithm 1: "Training the deepfake detection model"
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for its methodology.
Open Datasets | Yes | "We conduct experiments based on the same trainset and testset of the experimental data of Wang et al. (Wang et al. 2020). The trainset contains 20 objects of ProGAN (Karras et al. 2018). The testset consists of FFHQ (Karras, Laine, and Aila 2019) and LSUN (Yu et al. 2015)... and employs ImageNet (Russakovsky et al. 2015)... Also, we use CelebA (Liu et al. 2015)... and COCO (Lin et al. 2014)... Lastly, we utilize Deepfake dataset (Rossler et al. 2019)..."
Dataset Splits | No | The paper mentions using train and test sets but does not specify validation splits, percentages, or a dedicated validation dataset used for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper mentions various models (VGG, DCGAN, ResNet) and optimizers (Adam) but does not specify version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | "We use Adam (Kingma and Ba 2014) to train the perturbation map generator and the perturbation discriminator with learning rates of 10^-4 and 10^-1, respectively. Also, the deepfake classifier is trained by Adam (Kingma and Ba 2014) with a learning rate of 10^-4. The batch size of the optimizer is always set to 16, and the input image size is resized to 256 × 256 when the image sizes vary. The number of epochs is set to 20."
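The hyperparameters in the row above can be collected into a single configuration object, which is how they would typically be wired into a training script. This is a minimal sketch of those reported settings only; the names `FrePGANConfig` and `num_batches` are illustrative and do not appear in the paper.

```python
# Hedged sketch of the optimization settings reported for FrePGAN.
# Class/function names are assumptions; only the values come from the paper.
from dataclasses import dataclass


@dataclass
class FrePGANConfig:
    lr_generator: float = 1e-4      # perturbation map generator, Adam
    lr_discriminator: float = 1e-1  # perturbation discriminator, Adam
    lr_classifier: float = 1e-4     # deepfake classifier, Adam
    batch_size: int = 16            # fixed for all optimizers
    image_size: int = 256           # inputs resized to 256 x 256 when sizes vary
    epochs: int = 20


def num_batches(dataset_size: int, batch_size: int) -> int:
    """Optimizer steps per epoch, counting a final partial batch."""
    return -(-dataset_size // batch_size)  # ceiling division


cfg = FrePGANConfig()
steps_per_epoch = num_batches(100_000, cfg.batch_size)
```

A dataclass like this keeps the three learning rates distinct, which matters here because the discriminator uses a rate three orders of magnitude larger than the other two components.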