Wasserstein-2 Generative Networks

Authors: Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Alexander Safin, Evgeny Burnaev

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From the practical side, we evaluate our algorithm on a wide range of tasks: image-to-image color transfer, latent space optimal transport, image-to-image style transfer, and domain adaptation. In this section, we experimentally evaluate the proposed model. In Subsection 5.1, we apply our method to estimate optimal transport maps in the Gaussian setting. In Subsection 5.2, we consider latent space mass transport. In Subsection 5.3, we experiment with image-to-image style translation.
Researcher Affiliation | Academia | Alexander Korotin, Skolkovo Institute of Science and Technology, Moscow, Russia, a.korotin@skoltech.ru; Vage Egiazarian, Skolkovo Institute of Science and Technology, Moscow, Russia, vage.egiazarian@skoltech.ru; Arip Asadulaev, ITMO University, Saint Petersburg, Russia, aripasadulaev@itmo.ru; Aleksandr Safin, Skolkovo Institute of Science and Technology, Moscow, Russia, aleksandr.safin@skoltech.ru; Evgeny Burnaev, Skolkovo Institute of Science and Technology, Moscow, Russia, e.burnaev@skoltech.ru
Pseudocode | Yes | Algorithm 1: Numerical Procedure for Optimizing Regularized Correlations (12)
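The referenced Algorithm 1 optimizes a pair of ICNN potentials whose input-gradients act as mutually inverse transport maps, with the cycle regularization (the λ term in the experiment setup row below) keeping their composition close to the identity. Below is a minimal PyTorch sketch of such a cycle penalty; the names cycle_regularizer, psi, phi, and lam are illustrative assumptions, not the authors' code.

```python
import torch

def cycle_regularizer(psi, phi, y, lam=1.0):
    """Cycle-consistency penalty: lam * E || grad_psi(grad_phi(y)) - y ||^2.

    psi, phi: ICNN potentials mapping a batch (B, d) to per-sample scalars;
    their input-gradients serve as the forward and inverse transport maps.
    This is a sketch of the regularizer's form, not the full objective (12).
    """
    y = y.detach().requires_grad_(True)
    # Inverse map: x_hat = grad_phi(y); create_graph keeps it differentiable.
    x_hat = torch.autograd.grad(phi(y).sum(), y, create_graph=True)[0]
    # Forward map applied back: y_hat = grad_psi(x_hat).
    y_hat = torch.autograd.grad(psi(x_hat).sum(), x_hat, create_graph=True)[0]
    # Mean squared deviation from the identity, scaled by lambda.
    return lam * ((y_hat - y) ** 2).sum(dim=1).mean()
```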
Open Source Code | Yes | The code is written in the PyTorch framework and is publicly available at https://github.com/iamalexkorotin/Wasserstein2GenerativeNetworks.
Open Datasets | Yes | We test our algorithm on CelebA image generation (64×64). We test our model on the MNIST (60000 images; 28×28) and USPS (10000 images; rescaled to 28×28) digit datasets. We experiment with ConvICNN potentials on the publicly available Winter2Summer and Photo2Cezanne datasets containing 256×256 pixel images.
Dataset Splits | No | No explicit information on training/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) was found in the text.
Hardware Specification | Yes | The networks are trained on a single GTX 1080Ti.
Software Dependencies | No | The only dependency mentioned is the PyTorch framework; no version numbers or further dependencies are specified.
Experiment Setup | Yes | For each particular problem the networks are trained for 30000 iterations with 1024 samples in a mini-batch. Adam optimizer (Kingma & Ba, 2014) with lr = 10^-3 is used. We put λ = 1 in our cycle regularization and impose an additional 10^-10 L1 regularization on the weights. Adam optimizer with lr = 3·10^-4 is used. We put λ = 100 as the cycle regularization parameter.
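For concreteness, here is a hedged PyTorch sketch of the two reported optimization setups. The helper names (make_adam, l1_weight_penalty), the "A"/"B" setup labels, and the placeholder loss_fn and sample_batch are assumptions for illustration; the actual implementation is in the repository linked above.

```python
import torch

def make_adam(params, setup="A"):
    # Setup A: lr = 10^-3 (used with cycle lambda = 1 and 10^-10 L1 weight reg).
    # Setup B: lr = 3*10^-4 (used with cycle lambda = 100).
    return torch.optim.Adam(params, lr=1e-3 if setup == "A" else 3e-4)

def l1_weight_penalty(model, coef=1e-10):
    # The additional 10^-10 L1 regularization on the weights (setup A).
    return coef * sum(p.abs().sum() for p in model.parameters())

# Skeleton of the reported loop: 30000 iterations, 1024 samples per mini-batch.
# `loss_fn` and `sample_batch` are hypothetical stand-ins for the
# regularized-correlations objective and the data sampler.
# opt = make_adam(model.parameters(), setup="A")
# for step in range(30000):
#     loss = loss_fn(sample_batch(1024)) + l1_weight_penalty(model)
#     opt.zero_grad(); loss.backward(); opt.step()
```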