Deepfake Network Architecture Attribution
Authors: Tianyun Yang, Ziyao Huang, Juan Cao, Lei Li, Xirong Li
AAAI 2022, pp. 4662-4670 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multiple cross-test setups and a large-scale dataset demonstrate the effectiveness of DNA-Det. |
| Researcher Affiliation | Academia | (1) Key Lab of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China |
| Pseudocode | No | The paper describes the proposed method using text and figures (Figure 4, Figure 6), but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about making its source code publicly available, nor does it provide any links to a code repository. |
| Open Datasets | Yes | all of which are trained on the CelebA dataset (Liu et al. 2015). We apply these transformations on a natural image dataset containing LSUN (Yu et al. 2015) and CelebA. |
| Dataset Splits | No | The paper describes several cross-test setups (cross-seed, cross-loss, cross-finetune, cross-dataset) and mentions a 'train-set' in Table 1 when defining experiment conditions, but it does not give standard training/validation/test splits (e.g., percentages or exact counts) needed for reproducibility. (An illustrative construction of such cross-test splits is sketched below the table.) |
| Hardware Specification | No | The paper describes the network architecture (a shallow 8-layer CNN) and the image processing steps, but it does not specify hardware details such as GPU model, CPU type, or memory used in the experiments. (A hypothetical sketch of such a shallow CNN follows below the table.) |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not list software dependencies (e.g., libraries or frameworks) or their version numbers. |
| Experiment Setup | Yes | For optimization, we choose the Adam optimizer. For the CelebA experiment in Section …, the initial learning rate is set to 10^-4 and is multiplied by 0.9 every 500 iterations. For the LSUN-bedroom experiment in Section … and the experiment in Section …, the initial learning rate is set to 10^-3 and is multiplied by 0.9 every 2500 iterations. The batch size is 32 × #classes in Section … and 16 × #classes in Section …, with a class balance strategy. (A minimal sketch of this schedule appears below the table.) |
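
The cross-test setups noted in the Dataset Splits row can be illustrated with a small sketch. The following is a hypothetical construction, not the authors' code: the `GeneratorInstance` metadata fields, the helper `split_cross_test`, and the example instances are all invented for illustration. The idea behind each setup is to train on fake images from one set of generator instances and test on instances of the same architecture that differ only in the named factor (seed, loss, fine-tuning, or training dataset).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeneratorInstance:
    """Hypothetical metadata for one trained generator instance."""
    architecture: str  # attribution label, e.g. "ProGAN"
    seed: int          # random seed used to train the generator
    loss: str          # training loss of the generator
    dataset: str       # data the generator was trained on, e.g. "CelebA"

def split_cross_test(instances, factor, train_value):
    """Train on instances whose `factor` equals `train_value`; test on the
    rest, so the attribution model must generalize across that factor."""
    train = [g for g in instances if getattr(g, factor) == train_value]
    test = [g for g in instances if getattr(g, factor) != train_value]
    return train, test

# Invented example instances; real experiments would enumerate many more.
instances = [
    GeneratorInstance("ProGAN", seed=0, loss="wgan-gp", dataset="CelebA"),
    GeneratorInstance("ProGAN", seed=1, loss="wgan-gp", dataset="CelebA"),
    GeneratorInstance("ProGAN", seed=0, loss="hinge", dataset="LSUN"),
]
# Cross-seed setup: train on seed 0, test on all other seeds.
train_insts, test_insts = split_cross_test(instances, factor="seed", train_value=0)
```

The Hardware Specification row mentions that the classifier is a shallow 8-layer CNN. A hypothetical PyTorch sketch of such a network follows; the paper does not give the exact layer widths, kernel sizes, or pooling, so every concrete choice below (channel progression, 3×3 kernels, pooling after every second layer, global-average-pooling head) is an assumption.

```python
import torch.nn as nn

class ShallowAttributionCNN(nn.Module):
    """Illustrative 8-conv-layer CNN for architecture attribution.
    Only the depth (8 layers, shallow) comes from the paper; the
    widths, kernels, and pooling choices here are guesses."""
    def __init__(self, num_classes: int):
        super().__init__()
        layers, in_ch = [], 3
        for i, out_ch in enumerate((32, 32, 64, 64, 128, 128, 256, 256)):
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            if i % 2 == 1:  # downsample after every second conv (assumed)
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, num_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))
```

The Experiment Setup row quotes concrete optimization settings: Adam, an initial learning rate of 10^-4 (CelebA) or 10^-3 (LSUN-bedroom), decay by 0.9 every 500 or 2500 iterations, and a batch size of 32 × #classes (or 16 × #classes) with class-balanced sampling. Below is a minimal PyTorch sketch of that schedule under the CelebA setting; the model, dataset, total iteration count, and label handling are placeholders, not the released implementation.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, num_classes):
    # Class-balance strategy: draw each class with equal probability.
    labels_t = torch.as_tensor(labels)
    class_counts = torch.bincount(labels_t, minlength=num_classes)
    sample_weights = 1.0 / class_counts[labels_t].float()
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels))
    # Batch size = 32 x #classes, as quoted for the CelebA experiment.
    return DataLoader(dataset, batch_size=32 * num_classes, sampler=sampler)

def train(model, dataset, labels, num_classes, max_iters=10_000):
    # max_iters is a placeholder; the paper does not state a total count.
    loader = make_balanced_loader(dataset, labels, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Learning rate is multiplied by 0.9 every 500 iterations; the
    # LSUN-bedroom setting would use lr=1e-3 and step_size=2500 instead.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    it = 0
    while it < max_iters:
        for images, targets in loader:
            loss = criterion(model(images), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()  # stepped per iteration, not per epoch
            it += 1
            if it >= max_iters:
                break
```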
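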
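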