Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark
Authors: Alexander Korotin, Lingxiao Li, Aude Genevay, Justin M. Solomon, Alexander Filippov, Evgeny Burnaev
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We thoroughly evaluate existing optimal transport solvers using these benchmark measures. |
| Researcher Affiliation | Collaboration | Alexander Korotin: Skolkovo Institute of Science and Technology, Artificial Intelligence Research Institute, Moscow, Russia (EMAIL); Lingxiao Li: Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (EMAIL); Aude Genevay: Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (EMAIL); Justin Solomon: Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (EMAIL); Alexander Filippov: Huawei Noah's Ark Lab, Moscow, Russia (EMAIL); Evgeny Burnaev: Skolkovo Institute of Science and Technology, Artificial Intelligence Research Institute, Moscow, Russia (EMAIL) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We implement our benchmark in PyTorch and provide the pre-trained transport maps for all the benchmark pairs. The code is publicly available at https://github.com/iamalexkorotin/Wasserstein2Benchmark |
| Open Datasets | Yes | Images. We use the aligned images of the CelebA 64 faces dataset [22] (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to produce additional benchmark pairs. |
| Dataset Splits | No | The paper describes generating benchmark pairs and evaluating solvers on them, but does not specify explicit train/validation/test dataset splits with percentages or counts. |
| Hardware Specification | Yes | The experiments are conducted on 4 GTX 1080ti GPUs and require about 100 hours of computation (per GPU). |
| Software Dependencies | No | The paper mentions PyTorch but does not specify version numbers for PyTorch or any other key software libraries. |
| Experiment Setup | No | The paper mentions different neural network architectures such as DenseICNN, ConvICNN, ResNet, and UNet, but does not provide specific hyperparameter values or detailed system-level training configurations in the main text. |