SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer

Authors: Yuhta Takida, Masaaki Imaizumi, Takashi Shibuya, Chieh-Hsin Lai, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic and image datasets support our theoretical results and the effectiveness of SAN as compared to the usual GANs.
Researcher Affiliation | Collaboration | Sony AI, The University of Tokyo, Sony Group Corporation
Pseudocode | Yes | Algorithm 1: Training SAN (the blue lines indicate modified steps against GAN training)
Open Source Code | Yes | Our implementation is available on the project page https://ytakida.github.io/san/. We further provide our source code to reproduce our results at https://github.com/sony/san.
Open Datasets | Yes | We train SANs and GANs with various objective functions on CIFAR10 (Krizhevsky et al., 2009) and CelebA (128×128) (Liu et al., 2015). StyleGAN-XL (Sauer et al., 2022)... We train StyleSAN-XL on CIFAR10 and ImageNet (256×256) (Russakovsky et al., 2015).
Dataset Splits | No | The paper does not explicitly provide training, validation, and test split details (percentages, sample counts, or split files). It uses standard datasets such as CIFAR10 and CelebA, which have predefined splits, but does not state how those splits were used for validation.
Hardware Specification | No | The paper states "Computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by National Institute of Advanced Industrial Science and Technology (AIST) was used." This names a shared cluster but gives no specific hardware details such as GPU/CPU models, memory, or processor types.
Software Dependencies | No | The paper mentions software such as a "PyTorch implementation" and the "Adam optimizer (Kingma & Ba, 2015)" but does not specify version numbers for these components.
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2015) with betas of (0.0, 0.9) and an initial learning rate of 0.0002 for both the generator and discriminator. We train the models for 2,000 iterations with the minibatch size of 64.
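
The Pseudocode row above points to Algorithm 1, which modifies only a few steps of standard GAN training. As a rough illustration of the core component, the sketch below implements a last linear layer whose direction is kept on the unit sphere and whose output is split so that the direction and the preceding features can receive separate objectives. The class name, the stop-gradient placement, and the hinge/Wasserstein-style loss pairing are illustrative assumptions, not the authors' reference implementation (see https://github.com/sony/san for that).

```python
# Minimal PyTorch sketch of a "discriminative normalized linear layer":
# the discriminator's last layer keeps a unit-norm direction omega, and its
# output is split so that features and direction get separate gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SANLinear(nn.Module):
    """Last linear layer with a unit-norm direction omega (illustrative)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_features))

    def forward(self, h: torch.Tensor):
        omega = F.normalize(self.weight, dim=0)   # project onto the unit sphere
        out_fun = h @ omega.detach()              # gradients flow to the features h only
        out_dir = h.detach() @ omega              # gradients flow to the direction omega only
        return out_fun, out_dir


def san_discriminator_loss(fun_real, dir_real, fun_fake, dir_fake):
    # Illustrative pairing: a hinge objective trains the features, while a
    # Wasserstein-like objective trains the direction (assumption for this sketch).
    loss_fun = F.relu(1.0 - fun_real).mean() + F.relu(1.0 + fun_fake).mean()
    loss_dir = -(dir_real.mean() - dir_fake.mean())
    return loss_fun + loss_dir
```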
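
The Open Datasets row lists only publicly available benchmarks. As a hedged example (not the authors' data pipeline), CIFAR10 with its standard training split can be loaded through torchvision; CelebA and ImageNet have analogous dataset classes.

```python
# Illustrative loading of CIFAR10 via torchvision; paths and transforms are assumptions.
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

transform = T.Compose([T.ToTensor(), T.Normalize((0.5,) * 3, (0.5,) * 3)])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4, drop_last=True)
```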
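
The Experiment Setup row reports concrete optimizer settings. A minimal sketch of that configuration in PyTorch, assuming placeholder generator and discriminator modules rather than the paper's architectures, could look like this:

```python
# Hedged sketch of the reported settings: Adam with betas (0.0, 0.9),
# learning rate 0.0002, minibatch size 64, 2,000 training iterations.
import torch

generator = torch.nn.Sequential(torch.nn.Linear(128, 2))     # placeholder module
discriminator = torch.nn.Sequential(torch.nn.Linear(2, 1))   # placeholder module

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.9))

batch_size = 64
num_iterations = 2_000  # the training loop would run for this many iterations
```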