Kernel Mean Matching for Content Addressability of GANs

Authors: Wittawat Jitkrittum, Patsorn Sangkloy, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various high-dimensional image generation problems (CelebA-HQ, LSUN bedroom, bridge, tower) show that our approach is able to generate images which are consistent with the input set, while retaining the image quality of the original model.
Researcher Affiliation | Academia | 1 Empirical Inference Department, Max Planck Institute for Intelligent Systems, Germany; 2 School of Interactive Computing, Georgia Institute of Technology, USA.
Pseudocode | No | The paper describes procedures and optimization problems but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Python code is available at https://github.com/wittawatj/cadgan.
Open Datasets | Yes | We consider three categories of the LSUN dataset (Yu et al., 2015): bedroom, bridge, tower, and use pretrained GAN models from Mescheder et al. (2018) which were trained separately on training samples from each category. To show the importance of a nonlinear kernel k in (6), we consider a DCGAN (Radford et al., 2015) model trained on MNIST.
Dataset Splits | No | The paper uses well-known datasets such as MNIST and LSUN but does not provide specific percentages, counts, or references to predefined train/validation/test splits needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU/CPU models, memory amounts) for running its experiments.
Software Dependencies | No | The paper mentions 'Pytorch code' and links to GitHub repositories for DCGAN and CNN classifier implementations (e.g., https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/dcgan/dcgan.py), implying that PyTorch is used, but it does not specify version numbers for any software dependency.
Experiment Setup | Yes | We compare two different kernels (k in (6)): 1) a linear kernel, and 2) the IMQ kernel with kernel parameter c set to 10. For content-based generation, we use the IMQ kernel with parameter c = 100 and set the extractor E to be the output of the layer before the last fully connected layer of a pretrained Places365-ResNet classification model (Zhou et al., 2017). To solve (5), we use Adam (Kingma and Ba, 2015) which relies on the gradient.
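The setup row above mentions an IMQ kernel used in a kernel mean matching objective. The following is a minimal NumPy sketch of that idea, assuming the common inverse-multiquadric parameterization k(x, y) = 1 / sqrt(c^2 + ||x - y||^2) and hypothetical feature matrices F_gen and F_in whose rows are extractor outputs E(.); the paper's exact parameterization and objective may differ:

```python
import numpy as np

def imq_kernel(X, Y, c=100.0):
    """IMQ kernel matrix: k(x, y) = 1 / sqrt(c^2 + ||x - y||^2) (one common form)."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    sq = np.maximum(sq, 0.0)  # guard against tiny negatives from round-off
    return 1.0 / np.sqrt(c**2 + sq)

def kmm_loss(F_gen, F_in, c=100.0):
    """Squared-MMD-style moment-matching loss between generated and input features.

    Equals || mean_i phi(F_gen[i]) - mean_j phi(F_in[j]) ||^2 in the RKHS of the
    kernel, so it is zero when the two empirical mean embeddings coincide.
    """
    k_gg = imq_kernel(F_gen, F_gen, c).mean()
    k_gi = imq_kernel(F_gen, F_in, c).mean()
    k_ii = imq_kernel(F_in, F_in, c).mean()
    return k_gg - 2.0 * k_gi + k_ii
```

In the content-addressable-generation setting, a loss of this form would be minimized over the latent input z of a frozen generator (e.g., with Adam), with F_gen recomputed from the generated image's features at each step.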