Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark

Authors: Alexander Korotin, Lingxiao Li, Aude Genevay, Justin M. Solomon, Alexander Filippov, Evgeny Burnaev

NeurIPS 2021

Reproducibility assessment: each entry lists the variable, the result, and the supporting LLM response.
Research Type: Experimental. "We thoroughly evaluate existing optimal transport solvers using these benchmark measures."
Researcher Affiliation: Collaboration. Alexander Korotin (Skolkovo Institute of Science and Technology and Artificial Intelligence Research Institute, Moscow, Russia; a.korotin@skoltech.ru); Lingxiao Li (Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; lingxiao@mit.edu); Aude Genevay (Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; aude.genevay@gmail.com); Justin Solomon (Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; jsolomon@mit.edu); Alexander Filippov (Huawei Noah's Ark Lab, Moscow, Russia; filippov.alexander@huawei.com); Evgeny Burnaev (Skolkovo Institute of Science and Technology and Artificial Intelligence Research Institute, Moscow, Russia; e.burnaev@skoltech.ru).
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: Yes. "We implement our benchmark in PyTorch and provide the pre-trained transport maps for all the benchmark pairs. The code is publicly available at https://github.com/iamalexkorotin/Wasserstein2Benchmark"
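
Since the repository ships pre-trained transport maps, a natural use is to push samples through a recovered map. In the paper's W2 setting, solvers that learn a Brenier potential ψ recover the optimal map as T(x) = ∇ψ(x), which autograd computes directly. Below is a minimal, hypothetical PyTorch sketch of this step; `psi` stands in for a pre-trained potential, and the repository's actual loading API is not assumed here.

```python
import torch

# Hypothetical sketch: given a trained Brenier potential psi (a scalar-valued
# network), the W2-optimal transport map is T(x) = grad psi(x).
def transport_map(psi: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Push samples x forward through T = grad psi."""
    x = x.clone().requires_grad_(True)
    potential = psi(x).sum()                     # sum() makes a scalar, so autograd
    (grad,) = torch.autograd.grad(potential, x)  # returns per-sample gradients
    return grad                                  # same shape as x
```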
Open Datasets: Yes. "Images. We use the aligned images of CelebA64 faces dataset [22] to produce additional benchmark pairs." (Dataset page: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
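
The excerpt does not spell out the authors' CelebA preprocessing. A minimal sketch, assuming a standard torchvision pipeline, for obtaining aligned 64x64 CelebA faces; the crop size is an illustrative choice, not the paper's.

```python
import torchvision
import torchvision.transforms as T

# Assumption: standard torchvision loading, not necessarily the authors' pipeline.
transform = T.Compose([
    T.CenterCrop(140),  # crop size is a common convention, not specified by the paper
    T.Resize(64),       # downscale to the 64x64 resolution used by the benchmark
    T.ToTensor(),
])
celeba = torchvision.datasets.CelebA(
    root="./data", split="train", transform=transform, download=True
)
```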
Dataset Splits: No. The paper describes generating benchmark pairs and evaluating solvers on them, but it does not specify explicit train/validation/test splits with percentages or sample counts.
Hardware Specification: Yes. "The experiments are conducted on 4 GTX 1080ti GPUs and require about 100 hours of computation (per GPU)."
Software Dependencies: No. The paper mentions PyTorch but does not specify version numbers for PyTorch or any other key software libraries.
Experiment Setup: No. The paper mentions neural network architectures such as DenseICNN, ConvICNN, ResNet, and U-Net, but it does not provide specific hyperparameter values or detailed system-level training configurations in the main text.
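
For readers unfamiliar with the ICNN family named above: an input-convex neural network keeps its hidden-to-hidden weights non-negative and uses convex, non-decreasing activations, so its scalar output is convex in the input. The sketch below illustrates that constraint in PyTorch; the layer sizes, depth, and activation are assumptions and do not reproduce the paper's DenseICNN or ConvICNN configurations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative input-convex network (ICNN): non-negative hidden-path weights
# plus a convex, non-decreasing activation make psi(x) convex in x.
class ICNN(nn.Module):
    def __init__(self, dim: int, hidden: int = 128, depth: int = 3):
        super().__init__()
        self.x_layers = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden, bias=False) for _ in range(depth - 1)]
        )
        # Hidden-to-hidden weights are clamped non-negative in forward().
        self.z_layers = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)]
        )
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.x_layers[0](x))  # softplus is convex and non-decreasing
        for x_lin, z_lin in zip(self.x_layers[1:], self.z_layers):
            w_pos = z_lin.weight.clamp(min=0.0)        # enforce non-negativity
            z = F.softplus(x_lin(x) + F.linear(z, w_pos))
        return F.linear(z, self.out.weight.clamp(min=0.0))  # scalar potential psi(x)
```

A potential like this can then be pushed through the gradient-map pattern shown under Open Source Code above to obtain a transport map.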