MAGIC: Mask-Guided Image Synthesis by Inverting a Quasi-robust Classifier

Authors: Mozhdeh Rouhsedaghat, Masoud Monajatipoor, C.-C. Jay Kuo, Iacopo Masi

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experimental Evaluation: In this section, we investigate MAGIC's capabilities and the effect of the proposed components on the synthesized images. We offer an ablation study illustrating the effect of our contributions over the baseline, IMAGINE, and analyze the improvements. We further compare MAGIC with the state of the art through qualitative and quantitative evaluations. Quantitative Evaluation. We use machine perception as a proxy for measuring quality, employing the Fréchet Inception Distance (FID) of Heusel et al. (2017) and the Single Image FID of Shaham, Dekel, and Michaeli (2019). As shown in Tab. 1, MAGIC significantly outperformed DEEPSIM on both object and scene synthesis. To further evaluate our method, we used human perception by conducting a subjective evaluation of the quality of images synthesized by MAGIC compared to DEEPSIM.
Researcher Affiliation | Academia | Mozhdeh Rouhsedaghat¹, Masoud Monajatipoor², C.-C. Jay Kuo¹, Iacopo Masi³; ¹University of Southern California (USC), ²University of California, Los Angeles (UCLA), ³Sapienza University of Rome
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/mozhdehrouhsedaghat/magic
Open Datasets | Yes | Thereby, we replace θ with a quasi-robust model trained on ImageNet with Eq. 3, using an ℓ2 perturbation ball centered on the input with a very small ϵ = 0.05.
Dataset Splits | Yes | We evaluate MAGIC by conducting extensive experiments on images either randomly selected from the ImageNet validation set, collected from the web, or the same images that previous methods used.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory used for running its experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and refers to an implementation from another paper, but does not provide specific software names with version numbers for reproducibility.
Experiment Setup | Yes | For optimizing x∗, the hyper-parameters h in Eq. 4 are initially set as follows: η = 0.0, γ = 30.0, κ = 1.0, ν = 5.0, while the parameters in ρ(x∗) are α = 1e−4 and β = 1e−5. After 5,000 iterations, we start training θd with η = 0.05. This technique improves the alignment of the generated image with y and makes the training process more stable. We use the Adam optimizer with a learning rate λ of 5e−4.
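
The quantitative evaluation quoted in the Research Type row scores image quality with FID. Below is a minimal sketch of how such a comparison could be computed, assuming the torchmetrics implementation of FID and random stand-in tensors in place of the real and synthesized image batches; it is not the authors' evaluation code.

```python
# Minimal FID sketch (assumptions: torchmetrics is available; the image tensors are stand-ins).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # Inception pool3 features, as in Heusel et al. (2017)

# uint8 images in [0, 255], shape (N, 3, H, W); replace with reference and synthesized batches.
real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)    # accumulate statistics of the reference set
fid.update(fake_images, real=False)   # accumulate statistics of the generated set
print(f"FID: {fid.compute().item():.2f}")
```

SIFID, the second metric in the row, follows the same idea but compares Inception statistics of a single reference image against a single synthesized image.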
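
The Open Datasets row quotes the paper's Eq. 3: the classifier is made quasi-robust by training against perturbations constrained to a tiny ℓ2 ball (ϵ = 0.05). The sketch below shows one way the inner ℓ2-bounded perturbation could be generated with projected gradient ascent; the step size, number of steps, and function name are illustrative assumptions, not the authors' training code.

```python
# Sketch of an l2-constrained inner maximization (assumed step size and step count).
import torch
import torch.nn.functional as F

def l2_pgd_perturb(model, x, y, eps=0.05, step_size=0.02, steps=3):
    """Return x + delta with ||delta||_2 <= eps that approximately maximizes the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascent step along the normalized gradient (l2 geometry).
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + step_size * grad / grad_norm
        # Project the perturbation back onto the eps-radius l2 ball.
        delta_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / delta_norm).clamp(max=1.0)).detach().requires_grad_(True)
    return (x + delta).detach()
```

A quasi-robust model would then be obtained by minimizing the classification loss on these perturbed inputs instead of the clean ones, keeping ϵ small so clean accuracy is largely preserved.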
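
The Experiment Setup row gives concrete hyper-parameter values. The sketch below collects them into a configuration and shows the schedule change at iteration 5,000; only the numeric values come from the paper, while the variable names, image size, total iteration count, stand-in modules, and loop skeleton are assumptions.

```python
# Hyper-parameter values from the paper; names, stand-ins, and loop structure are assumed.
import torch

h   = {"eta": 0.0, "gamma": 30.0, "kappa": 1.0, "nu": 5.0}  # weights h in Eq. 4
rho = {"alpha": 1e-4, "beta": 1e-5}                          # weights of the regularizer rho(x*)
lr  = 5e-4                                                   # Adam learning rate (lambda)

x_star  = torch.randn(1, 3, 256, 256, requires_grad=True)   # image being optimized (assumed size)
theta_d = torch.nn.Conv2d(3, 1, 4, stride=2)                 # stand-in for the discriminator theta_d

optimizer = torch.optim.Adam([x_star, *theta_d.parameters()], lr=lr)

total_iters = 10_000  # assumed; the paper only specifies the switch at 5,000 iterations
for it in range(total_iters):
    if it == 5_000:
        h["eta"] = 0.05  # start training theta_d after 5,000 iterations
    # ... compute the Eq. 4 objective with weights h and rho, backpropagate, optimizer.step() ...
```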