Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation

Authors: Di Mi, Yanjun Zhang, Leo Yu Zhang, Shengshan Hu, Qi Zhong, Haizhuan Yuan, Shirui Pan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on different image translation tasks, including image super-resolution and style transfer, are performed on different backbone victim models, and the new design consistently outperforms the baseline by a large margin across all metrics.
Researcher Affiliation | Academia | (1) Xiangtan University; (2) University of Technology Sydney; (3) Griffith University; (4) Huazhong University of Science and Technology; (5) City University of Macau.
Pseudocode | No | The paper refers to an algorithm in the supplementary material ("We summarize the process in algorithm in the Supp.-C."), but no pseudocode or algorithm block is present in the main text.
Open Source Code | No | "Given the nature of Sec. 6 which involves discussing real-world attacks, we do not publicly release our code of this section in the usual manner. Instead, we intend to make it accessible only to legitimate researchers as per request."
Open Datasets | Yes | The datasets employed to train the victim style transfer models are horse2zebra and photo2vangogh (Zhu et al. 2021). The datasets used for training the victim super-resolution model are DIV2K (Agustsson and Timofte 2017), Flickr2K (Timofte et al. 2017), and Outdoor Scene Training (Wang et al. 2018b). "For the horse2zebra task, we employ horse images from the Animal10 dataset (Zhu et al. 2019). For the photo2vangogh task, we utilize a subset of two thousand landscape images from the Landscape dataset (Rougetet 2020)." For the super-resolution task, the Anime dataset (Chen 2020) is used.
Dataset Splits | No | The paper describes training and test sets but does not explicitly mention a validation set or the corresponding splits.
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software components such as Pix2Pix and CycleGAN but does not give their version numbers or other software dependencies with version details.
Experiment Setup | No | The paper mentions the Adam and SAM optimizers and a regularization coefficient alpha, but provides no numerical values for these hyperparameters or other concrete training settings such as learning rate, batch size, or number of epochs in the main text.
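The Dataset Splits row notes that the paper reports only training and test sets, with no validation split. For readers reproducing the work, a minimal sketch of carving out a three-way split is shown below; the 80/10/10 ratio, the fixed seed, and the 2,000-item count (mirroring the Landscape subset size) are illustrative assumptions, not values taken from the paper.

```python
import random

def split_dataset(items, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle items with a fixed seed and return (train, val, test) lists."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed keeps the split reproducible
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

# e.g. indices for the 2,000-image Landscape subset mentioned in the paper
train, val, test = split_dataset(range(2000))
```

Recording the seed alongside the ratios is what would make such a split reproducible by others, which is exactly the detail the paper omits.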
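The Experiment Setup row notes that Adam and SAM are named without hyperparameters. For readers unfamiliar with SAM (Sharpness-Aware Minimization), here is a minimal sketch of one SAM update on a toy quadratic loss, with plain gradient descent standing in for Adam as the base optimizer; the learning rate and the perturbation radius rho are illustrative guesses, not the authors' settings.

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)   # toy quadratic loss, minimum at w = 0

def grad(w):
    return w                      # gradient of the quadratic above

def sam_step(w, lr=0.1, rho=0.05):
    """One SAM update: ascend to a nearby 'sharp' point, then descend
    using the gradient evaluated there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # scaled ascent direction
    g_sharp = grad(w + eps)                      # gradient at perturbed weights
    return w - lr * g_sharp                      # base-optimizer step (plain GD here)

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
# loss(w) has shrunk far below its starting value of 2.5
```

In the paper's setting the base step would be an Adam update on the GAN generator's parameters rather than plain gradient descent, and rho and the learning rate would need to be reported for the experiments to be reproducible.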