On Scalable and Efficient Computation of Large Scale Optimal Transport
Authors: Yujia Xie, Minshuo Chen, Haoming Jiang, Tuo Zhao, Hongyuan Zha
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on both synthetic and real datasets illustrate that SPOT is robust and has favorable convergence behavior. |
| Researcher Affiliation | Academia | Yujia Xie, Minshuo Chen, Haoming Jiang, Tuo Zhao, Hongyuan Zha (Georgia Tech). |
| Pseudocode | Yes | Algorithm 1 Mini-batch Primal Dual Stochastic Gradient Algorithm for SPOT |
| Open Source Code | No | The paper neither states that its own source code is released nor provides a repository link. It only links to the code of a comparison method (DeepJDOT). |
| Open Datasets | Yes | We evaluate DASPOT with the MNIST, MNIST-M, USPS (Hull, 1994), and SVHN (Netzer et al., 2011) datasets. |
| Dataset Splits | No | The paper does not specify precise dataset split percentages or sample counts for training, validation, and testing. It refers to using datasets for tasks like 'Source Domain -> Target Domain' but not the specific partitioning methodology. |
| Hardware Specification | Yes | All experiments are implemented with PyTorch using one GTX 1080Ti GPU and a Linux desktop computer with 32GB memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version details. |
| Experiment Setup | Yes | We adopt the Adam optimizer with configuration parameters β1 = 0.5 and β2 = 0.999 (Kingma & Ba, 2014). [...] We set da = 0 for the first 10^5 iterations to wait for the generators to be well trained. Then we set da = 10 for the next 3 × 10^5 iterations. We take 4 × 10^5 iterations in total, and set the learning rate equal to 10^-4 and the batch size equal to 128 for all experiments. (See the configuration sketch below.) |
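
The Experiment Setup row pins down the optimizer configuration (Adam with β1 = 0.5, β2 = 0.999, learning rate 10^-4, batch size 128, 4 × 10^5 iterations) and the schedule for the `da` coefficient, but not the full training loop. Below is a minimal PyTorch sketch of that configuration, assuming a generator/discriminator pair trained with alternating primal-dual updates as named in Algorithm 1. The network architectures, the placeholder losses, and the helper name `da_weight` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder networks; the paper's actual architectures are not reproduced here.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

# Adam with the stated configuration parameters (beta1, beta2) = (0.5, 0.999)
# and learning rate 1e-4 (Kingma & Ba, 2014).
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))

batch_size = 128          # stated batch size for all experiments
total_iters = 4 * 10**5   # stated total number of iterations

def da_weight(it):
    """Stated schedule for the 'da' coefficient: 0 for the first 1e5 iterations
    (letting the generators train first), then 10 for the remaining 3e5."""
    return 0.0 if it < 10**5 else 10.0

for it in range(total_iters):
    z = torch.randn(batch_size, 64)        # mini-batch of latent samples
    x_real = torch.randn(batch_size, 784)  # stand-in for a real data mini-batch

    # Dual (discriminator) ascent step on a placeholder adversarial objective.
    d_loss = -(discriminator(x_real).mean()
               - discriminator(generator(z).detach()).mean())
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Primal (generator) descent step; a domain-adaptation term would be
    # added here with weight da_weight(it) in the DASPOT experiments.
    g_loss = -discriminator(generator(z)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

This sketch only mirrors the hyperparameters quoted from the paper; the exact SPOT/DASPOT objectives should be taken from Algorithm 1 and the experiment section of the paper itself.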