Discriminator optimal transport

Authors: Akinori Tanaka

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Based on some experiments and a bit of OT theory, we propose discriminator optimal transport (DOT) scheme to improve generated images. We show that it improves inception score and FID calculated by unconditional GAN trained by CIFAR-10, STL-10 and a public pre-trained model of conditional GAN trained by ImageNet.
Researcher Affiliation | Academia | Akinori Tanaka, Mathematical Science Team, RIKEN Center for Advanced Intelligence Project (AIP), 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223-8522, Japan; akinori.tanaka@riken.jp
Pseudocode | Yes | Algorithm 1: Target space optimal transport by gradient descent; Algorithm 2: Latent space optimal transport by gradient descent; Algorithm 3: Latent space conditional optimal transport by gradient descent. A hedged sketch of the latent-space variant is given after the table.
Open Source Code | Yes | One can download our codes from https://github.com/AkinoriTanaka-phys/DOT.
Open Datasets | Yes | trained by CIFAR-10, STL-10 and a public pre-trained model of conditional GAN trained by ImageNet. [...] To calculate FID, we use the available 798,900 image files in the ILSVRC2012 dataset.
Dataset Splits | No | No specific dataset split information (percentages, sample counts, or explicit methodology for splitting) is given for training, validation, or test sets. The paper mentions using standard datasets such as CIFAR-10 and ImageNet, which typically have predefined splits, but it does not explicitly state the splits used in this paper.
Hardware Specification | No | The paper mentions support from the 'RIKEN AIP deep learning environment (RAIDEN)' and 'RIKEN iTHEMS' but does not specify any particular hardware components such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). While it mentions optimizers like 'Adam' and 'SGD', these are algorithms, not specific software versions.
Experiment Setup | Yes | We apply gradient descent updates with Adam(α, β1, β2) = (0.01, 0, 0.9). [...] ϵ = 0.01 SGD is applied 20 times for CIFAR-10 and 10 times for STL-10. k_eff is calculated from 100 samples and δ = 0.001. (See the sketches below for how these settings could fit into a DOT update loop.)
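
The pseudocode row above lists latent-space optimal transport by gradient descent (Algorithm 2). Below is a minimal sketch of that idea, assuming the DOT objective ||z - z0|| - D(G(z))/k_eff and a normalized-gradient update; the PyTorch framework, the function names, and the exact update rule are assumptions rather than a reproduction of the released code.

```python
import torch

def latent_dot(G, D, z0, k_eff, n_steps=20, eps=0.01, delta=0.001):
    """Sketch of latent-space discriminator optimal transport (cf. Algorithm 2).

    Starting from a latent code z0, take gradient steps on the assumed
    objective ||z - z0|| - D(G(z)) / k_eff, pulling the sample toward
    regions the discriminator scores highly while staying close to z0.
    """
    z = z0.clone().requires_grad_(True)
    for _ in range(n_steps):
        objective = (z - z0).norm() - D(G(z)).squeeze() / k_eff
        grad, = torch.autograd.grad(objective, z)
        # Normalized-gradient step of size eps; delta guards against a zero norm.
        z = (z - eps * grad / (grad.norm() + delta)).detach().requires_grad_(True)
    return z.detach()
```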
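
The experiment-setup row says k_eff is calculated from 100 samples with δ = 0.001, and that ϵ = 0.01 SGD is applied 20 times for CIFAR-10 (10 times for STL-10). A plausible reading, sketched below, estimates k_eff as an empirical Lipschitz-like constant of D(G(·)) over random latent pairs; the estimator form and the latent_dim/device arguments are assumptions, not the paper's implementation.

```python
import torch

def estimate_k_eff(G, D, latent_dim, n_pairs=100, delta=0.001, device="cpu"):
    """Empirical Lipschitz-like constant of D(G(.)) (assumed estimator).

    Draws n_pairs random latent pairs and returns the largest ratio
    |D(G(z)) - D(G(z'))| / (||z - z'|| + delta).
    """
    with torch.no_grad():
        z_a = torch.randn(n_pairs, latent_dim, device=device)
        z_b = torch.randn(n_pairs, latent_dim, device=device)
        num = (D(G(z_a)).flatten() - D(G(z_b)).flatten()).abs()
        den = (z_a - z_b).flatten(1).norm(dim=1) + delta
        return (num / den).max().item()

# Hypothetical usage with the quoted settings (k_eff from 100 samples,
# delta = 0.001, then eps = 0.01 SGD for 20 steps as reported for CIFAR-10):
# k_eff = estimate_k_eff(G, D, latent_dim=128)
# z_improved = latent_dot(G, D, z0, k_eff, n_steps=20, eps=0.01, delta=0.001)
```

The Adam(α, β1, β2) = (0.01, 0, 0.9) update quoted in the same row would correspond to replacing the normalized SGD step with torch.optim.Adam([z], lr=0.01, betas=(0, 0.9)).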