Initiative Defense against Facial Manipulation

Authors: Qidong Huang, Jie Zhang, Wenbo Zhou, Weiming Zhang, Nenghai Yu (pp. 1619-1627)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the effectiveness and robustness of our framework in different settings. Finally, we hope this work can shed some light on initiative countermeasures against more adversarial scenarios."
Researcher Affiliation | Academia | Qidong Huang, Jie Zhang, Wenbo Zhou, Weiming Zhang, Nenghai Yu, University of Science and Technology of China, {hqd0037@mail., welbeckz@, zjzac@mail., zhangwm@, ynh@}ustc.edu.cn
Pseudocode | Yes | "Algorithm 1: Two-stage Training Framework"
Open Source Code | No | The paper does not provide any statement or link indicating the release of its source code.
Open Datasets | Yes | "For facial attribute editing, we use face images from the CelebA dataset (Liu et al. 2015), which is further split into 100000 images for two-stage training, 100000 for training the target model and 2600 for inference stage." (A hedged split sketch follows the table.)
Dataset Splits | No | The paper describes splits for two-stage training, target-model training, and the inference stage, but does not specify a distinct validation split with percentages or counts.
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "In the first scenario, the learning rate of both PG and the surrogate model are assigned as 0.0001 by default, and five randomly selected attribute domains are involved in the two-stage training framework. Empirically, we adopt λ = 0.01, λ1 = λ2 = 10, λ3 = 2.5, and λ4 = 1 in the related loss function. For the second scenario, the initial learning rates of all models are equal to 0.0002 and λ = 0.01 by default." (A hedged configuration sketch follows the table.)