The Value of AI Guidance in Human Examination of Synthetically-Generated Faces

Authors: Aidan Boyd, Patrick Tinsley, Kevin Bowyer, Adam Czajka

AAAI 2023

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | In this paper, we investigate whether these human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection when compared to models trained without human guidance. We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face, and annotating regions supporting their decisions. In total, 56,015 annotations across 3,780 unique face images were collected.
Researcher Affiliation | Academia | University of Notre Dame, Notre Dame, Indiana 46556, USA; {aboyd3,ptinsley,kwb,aczajka}@nd.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | With this paper we also release the entire dataset of 56,015 annotations and reaction times, which can facilitate future research on human-AI pairing in the context of synthetic face detection. https://github.com/CVRL/AI-Guidance
Open Datasets | Yes | In this study, we use authentic images from Flickr-Faces-HQ (Karras, Laine, and Aila 2019), and synthetic face images from six different generators: ProGAN, StyleGAN, StyleGAN2, StyleGAN2-ADA, StyleGAN3, and StarGANv2 (Karras et al. 2017; Karras, Laine, and Aila 2019; Karras et al. 2020b,a, 2021; Choi et al. 2020).
Dataset Splits | No | The paper uses pre-trained models and describes how images are presented to human subjects in the control and experiment phases, but it does not provide train/validation/test splits for the ML models themselves.
Hardware Specification | No | The paper does not specify the hardware used for the experiments or model training, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions the use of pre-trained DenseNet-121 models but does not list software dependencies such as programming languages, libraries, or frameworks with version numbers.
Experiment Setup | Yes | The paper includes detailed experiment descriptions, and the authors make source code and collected data (decisions, annotations, synthetic images) available for full reproducibility and for deriving other variants of the experiments. Specific details on AI cues, data acquisition strategy, quality checks, and subject instructions are provided in the relevant sections.
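Since the released dataset contains per-subject decisions and reaction times, a natural first analysis is per-subject classification accuracy. The sketch below is a minimal, hypothetical example: the column names (`subject_id`, `image_id`, `true_label`, `decision`, `reaction_time_ms`) are assumptions for illustration and the actual file layout in the CVRL/AI-Guidance repository may differ.

```python
import csv
from collections import defaultdict

def per_subject_accuracy(path):
    """Compute each subject's fraction of correct authentic/synthetic decisions.

    Assumes a CSV with (hypothetical) columns: subject_id, image_id,
    true_label, decision, reaction_time_ms. Adjust the column names to
    match the actual released files.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total[row["subject_id"]] += 1
            if row["decision"] == row["true_label"]:
                correct[row["subject_id"]] += 1
    return {s: correct[s] / total[s] for s in total}
```

The same pattern extends to comparing control-phase vs. experiment-phase accuracy by grouping on an additional phase column, if one is present in the released data.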