Kernel Neural Optimal Transport
Authors: Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test NOT with kernel costs on the unpaired image-to-image translation task. In this section, we test our algorithm on an unpaired image-to-image translation task. |
| Researcher Affiliation | Academia | Alexander Korotin (Skolkovo Institute of Science and Technology; Artificial Intelligence Research Institute, Moscow, Russia) a.korotin@skoltech.ru; Daniil Selikhanovych (Skolkovo Institute of Science and Technology, Moscow, Russia) selikhanovychdaniil@gmail.com; Evgeny Burnaev (Skolkovo Institute of Science and Technology; Artificial Intelligence Research Institute, Moscow, Russia) e.burnaev@skoltech.ru |
| Pseudocode | No | The paper references "NOT's training procedure (Korotin et al., 2023, Algorithm 1)" but does not include a pseudocode block or algorithm within this paper itself (a hedged sketch of that alternating maximin loop is given after the table). |
| Open Source Code | Yes | The code is written in the PyTorch framework and is available at https://github.com/iamalexkorotin/KernelNeuralOptimalTransport |
| Open Datasets | Yes | Image datasets. We test the following datasets as P, Q: aligned anime faces, celebrity faces (Liu et al., 2015), shoes (Yu & Grauman, 2014), Amazon handbags, churches from the LSUN dataset (Yu et al., 2015), outdoor images from the MIT Places database (Zhou et al., 2014), describable textures (Cimpoi et al., 2014). |
| Dataset Splits | No | We pick 90% of each dataset for unpaired training. The remaining 10% are considered as the test set. The paper explicitly mentions train and test splits but no dedicated validation split (a minimal split sketch is given after the table). |
| Hardware Specification | Yes | NOT with kernel costs for 128×128 images converges in 3-4 days on 4 Tesla V100 GPUs (16 GB). |
| Software Dependencies | No | The paper mentions the 'PyTorch framework' and 'Adam optimizer' but does not specify their version numbers or any other software dependencies with specific versions. |
| Experiment Setup | Yes | The learning rate is lr = 1 × 10⁻⁴. We use the MultiStepLR scheduler, which decreases lr by a factor of 2 after [15k, 25k, 40k, 55k, 70k] iterations of f_ω. The batch size is \|X\| = 64, \|Z_x\| = 4. The number of inner iterations is k_T = 10 (a configuration sketch is given after the table). |
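
The pseudocode row notes that the paper only references NOT's training procedure (Korotin et al., 2023, Algorithm 1) without reproducing it. Below is a minimal, hedged sketch of an alternating maximin loop of that general form; the toy networks, the placeholder squared-distance cost, the Gaussian stand-ins for the batches, and the update order are illustrative assumptions, and the kernel weak cost proposed in this paper would replace the placeholder cost.

```python
import torch
import torch.nn as nn

# Hedged sketch of an alternating maximin loop of the kind referenced as
# "NOT's training procedure (Korotin et al., 2023, Algorithm 1)".
# Networks, cost, and sampling below are toy placeholders, not the authors' setup.

DIM, Z_DIM, K_T = 16, 4, 10  # toy data/latent dimensions, inner T steps per f step

T = nn.Sequential(nn.Linear(DIM + Z_DIM, 64), nn.ReLU(), nn.Linear(64, DIM))  # T(x, z)
f = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))            # potential f(y)
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)

def cost(x, y):
    # Placeholder strong cost ||x - y||^2; the paper studies weak kernel costs instead.
    return ((x - y) ** 2).sum(dim=-1)

for step in range(100):  # toy number of outer iterations
    # Inner loop: k_T gradient steps on the stochastic transport map T.
    for _ in range(K_T):
        x = torch.randn(64, DIM)    # batch from P (toy Gaussian stand-in)
        z = torch.randn(64, Z_DIM)  # latent noise z ~ S
        y_hat = T(torch.cat([x, z], dim=-1))
        loss_T = (cost(x, y_hat) - f(y_hat).squeeze(-1)).mean()
        opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # Outer step: one gradient step on the potential f
    # (maximize E_Q[f] - E[f(T)] by minimizing its negative).
    x = torch.randn(64, DIM)
    z = torch.randn(64, Z_DIM)
    y = torch.randn(64, DIM)        # batch from Q (toy Gaussian stand-in)
    with torch.no_grad():
        y_hat = T(torch.cat([x, z], dim=-1))
    loss_f = f(y_hat).squeeze(-1).mean() - f(y).squeeze(-1).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```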
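
For the 90%/10% unpaired train/test split mentioned in the Dataset Splits row, a minimal sketch using PyTorch utilities might look as follows; the dataset path, transform, resolution, and seed are placeholders, not the authors' exact pipeline.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical 90%/10% split of one of the image datasets listed above.
# Path, resolution, and seed are illustrative assumptions.
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("path/to/anime_faces", transform=transform)

n_train = int(0.9 * len(dataset))             # 90% used for unpaired training
n_test = len(dataset) - n_train               # remaining 10% held out as the test set
generator = torch.Generator().manual_seed(0)  # fixed seed for a reproducible split
train_set, test_set = random_split(dataset, [n_train, n_test], generator=generator)
```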
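
The reported optimizer settings translate directly into a PyTorch configuration. The sketch below wires Adam with lr = 1e-4 to a MultiStepLR scheduler that halves the learning rate at the listed milestones; the network `f_omega` here is a placeholder, not the authors' architecture.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder potential network f_omega; the real architecture is not reproduced here.
f_omega = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 128 * 128, 1))

# Hyperparameters as reported in the Experiment Setup row above.
optimizer_f = Adam(f_omega.parameters(), lr=1e-4)
scheduler_f = MultiStepLR(
    optimizer_f,
    milestones=[15_000, 25_000, 40_000, 55_000, 70_000],  # iterations of f_omega
    gamma=0.5,  # "decreases lr by a factor of 2" at each milestone
)

BATCH_SIZE_X = 64  # |X|: input samples per batch
Z_PER_X = 4        # |Z_x|: latent samples per input x
K_T = 10           # inner iterations of the transport map T per iteration of f_omega

# scheduler_f.step() would be called once per iteration of f_omega.
```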