An Adversarial Framework for Generating Unseen Images by Activation Maximization

Authors: Yang Zhang, Wang Zhou, Gaoyuan Zhang, David Cox, Shiyu Chang (pp. 3371-3379)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. In this section, we present two sets of results. The first set demonstrates the quality of the generated images and how much information PROBEGAN can recover from the classifiers; the second shows how PROBEGAN can be used to interpret image classifiers.
Configurations. Datasets: To evaluate the performance of our approach, we conduct experiments on two image datasets, CIFAR-10 (Krizhevsky, Nair, and Hinton 2009) and the Waterbird dataset (Sagawa et al. 2019), and an audio dataset (Gemmeke et al. 2017; Veaux, Yamagishi, and MacDonald 2016). We randomly select one class as the unseen class.
Baselines: Two baselines are implemented. BIGGAN-AM (Li et al. 2020) synthesizes images from a classifier by using a GAN pre-trained on ImageNet as a strong prior and searching for embeddings that can be mapped to the target class. NAIVE is PROBEGAN without the conditional discriminator and class dilution; only the unconditional discriminator distinguishes fake images of the target class from real images of the seen classes, so it is expected to suffer from the discriminator focusing on class differences instead of naturalness. Each algorithm is trained with both a regular classifier and a robust classifier.
Evaluation metrics: The class-wise Fréchet Inception Distance (FID), i.e., the intra-FID score (Miyato and Koyama 2018), is used for quantitative evaluation on the image classifiers. The FID score is the Wasserstein-2 distance between the Inception-v3 feature distributions of the generated and real images; a lower FID indicates that the two image sets are more similar. Sample images are included to qualitatively illustrate performance. In addition, we employ Amazon Mechanical Turk (MTurk) to categorize the generated samples and report the percentage of correctly recognized samples of the new class; higher recognition rates indicate better resemblance to the target class.
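The intra-FID described above fits a Gaussian to the Inception-v3 features of each image set and takes the Wasserstein-2 distance between the two Gaussians. Below is a minimal sketch of that computation, assuming the feature matrices have already been extracted; the function and variable names are ours, not from the paper's code.

```python
import numpy as np
from scipy import linalg

def fid_from_features(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Wasserstein-2 distance between Gaussians fitted to the two feature sets."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_f, disp=False)  # matrix square root
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_f) ** 2)
                 + np.trace(sigma_r + sigma_f - 2.0 * covmean))

# Intra-FID (class-wise FID): average the per-class FID over all classes, e.g.
# intra_fid = np.mean([fid_from_features(real_feats[c], fake_feats[c]) for c in classes])
```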
Researcher Affiliation: Collaboration. Yang Zhang*1, Wang Zhou2, Gaoyuan Zhang1, David Cox1, Shiyu Chang3. 1MIT-IBM Watson AI Lab, Cambridge, MA, USA; 2Meta AI, New York, NY, USA; 3University of California at Santa Barbara, USA. {yang.zhang2, gaoyuan.zhang, david.d.cox}@ibm.com, wangzhou@fb.com, chang87@ucsb.edu
Pseudocode: No. The paper does not contain a pseudocode block or a clearly labeled algorithm block. Figure 1 illustrates the PROBEGAN framework with data-flow diagrams, but not pseudocode.
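Since no pseudocode is given, the following is a minimal, hypothetical sketch of what one training step could look like, inferred only from the components this report quotes (a class-conditional generator, an unconditional discriminator for naturalness, a conditional discriminator over the seen classes, and a classifier-driven activation-maximization term for the unseen class). The non-saturating loss form, the weight lambda_cls, and the omission of class dilution are our assumptions, not the authors' specification.

```python
import torch
import torch.nn.functional as F

def probegan_style_step(G, D_uncond, D_cond, classifier,
                        real_x, real_y, unseen_class, num_classes,
                        opt_g, opt_d, z_dim=128, lambda_cls=1.0):
    device = real_x.device
    b = real_x.size(0)
    z = torch.randn(b, z_dim, device=device)
    y = torch.randint(0, num_classes, (b,), device=device)  # fake labels, may include the unseen class
    seen = y != unseen_class

    # --- Discriminator update (non-saturating GAN losses, an assumption) ---
    fake_x = G(z, y).detach()
    d_loss = (F.softplus(-D_uncond(real_x)).mean()            # unconditional D: real -> "real"
              + F.softplus(D_uncond(fake_x)).mean()           # unconditional D: fake -> "fake"
              + F.softplus(-D_cond(real_x, real_y)).mean())   # conditional D sees only seen-class reals
    if seen.any():
        d_loss = d_loss + F.softplus(D_cond(fake_x[seen], y[seen])).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Generator update ---
    fake_x = G(z, y)
    g_loss = F.softplus(-D_uncond(fake_x)).mean()
    if seen.any():
        g_loss = g_loss + F.softplus(-D_cond(fake_x[seen], y[seen])).mean()
    if (~seen).any():
        # Activation maximization: push unseen-class samples toward the target
        # class of the probed classifier (kept frozen: only G's optimizer steps).
        logits = classifier(fake_x[~seen])
        g_loss = g_loss + lambda_cls * F.cross_entropy(
            logits, torch.full_like(y[~seen], unseen_class))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```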
Open Source Code: Yes. Our code is at https://github.com/csmiler/ProbeGAN/.
Open Datasets: Yes. To evaluate the performance of our approach, we conduct experiments on two image datasets, CIFAR-10 (Krizhevsky, Nair, and Hinton 2009) and the Waterbird dataset (Sagawa et al. 2019), and an audio dataset (Gemmeke et al. 2017; Veaux, Yamagishi, and MacDonald 2016). We randomly select one class as the unseen class.
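As a usage illustration of the "one class held out as unseen" setup quoted above, here is a minimal sketch using torchvision's CIFAR-10 loader; the helper name and the fixed class index are ours (the paper selects the unseen class randomly).

```python
import torch
from torchvision import datasets, transforms

def cifar10_without_class(root: str, unseen_class: int, train: bool = True):
    """Return CIFAR-10 with every real image of `unseen_class` removed."""
    ds = datasets.CIFAR10(root=root, train=train, download=True,
                          transform=transforms.ToTensor())
    keep = [i for i, label in enumerate(ds.targets) if label != unseen_class]
    return torch.utils.data.Subset(ds, keep)

# Example: hold out class 0 ("airplane"); the GAN and its discriminators then
# never see real images of this class during training.
seen_train = cifar10_without_class("./data", unseen_class=0, train=True)
```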
Dataset Splits: No. The paper refers to using datasets such as CIFAR-10 but does not specify exact percentages or counts for training, validation, or test splits. It implicitly uses the training data for the GANs and evaluates on test data, but an explicit split breakdown is absent.
Hardware Specification: No. The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used to run its experiments.
Software Dependencies: No. The paper mentions a "PyTorch implementation" but does not specify its version or any other software dependencies with corresponding versions.
Experiment Setup: No. The paper describes aspects of the model architecture and training strategy (e.g., parameter sharing between discriminators, class-information handling for NAIVE) but does not provide specific hyperparameters such as learning rate, batch size, or optimizer settings in the main text. It notes that "More experiment details can be found in the appendix", but those details are not present in the provided paper text.