Scalable Unbalanced Optimal Transport using Generative Adversarial Networks
Authors: Karren D. Yang, Caroline Uhler
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We propose an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs, and perform numerical experiments demonstrating how this methodology can be applied to population modeling." and "We demonstrate in practice how our methodology can be applied towards population modeling using the MNIST and USPS handwritten digits datasets, the CelebA dataset, and a recent single-cell RNA-seq dataset from zebrafish embryogenesis." |
| Researcher Affiliation | Academia | Karren D. Yang & Caroline Uhler, Laboratory for Information & Decision Systems, Institute for Data, Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, USA. {karren, cuhler}@mit.edu |
| Pseudocode | Yes | Algorithm 1: Generative-Adversarial Framework for Unbalanced Monge OT (a hedged training-loop sketch appears after the table) |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating the release of source code for the described methodology. |
| Open Datasets | Yes | "We demonstrate in practice how our methodology can be applied towards population modeling using the MNIST and USPS handwritten digits datasets, the CelebA dataset, and a recent single-cell RNA-seq dataset from zebrafish embryogenesis." |
| Dataset Splits | No | The paper mentions datasets used but does not provide specific details on train/validation/test splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not specify the exact hardware used for experiments, such as GPU/CPU models, memory, or specific cloud instance types. |
| Software Dependencies | No | The paper mentions using fully-connected feedforward networks with ReLU activations, but does not provide specific version numbers for software components like programming languages, libraries (e.g., PyTorch, TensorFlow), or solvers. |
| Experiment Setup | Yes | "For our experiments in Section 4, we used fully-connected feedforward networks with 3 hidden layers and ReLU activations. For T, the output activation layer was a sigmoid function to map the final pixel brightness to the range (0, 1). For ξ, the output activation layer was a softplus function to map the scaling factor weight to the range (0, ∞)." (See the network sketch after the table.) |
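The Experiment Setup row describes the architectures concretely enough to sketch. The snippet below is a minimal illustration, not the authors' released code (none exists, per the Open Source Code row): PyTorch and the hidden width of 512 are assumptions, while the 3 hidden ReLU layers and the sigmoid/softplus output activations for T and ξ are taken from the paper.

```python
# Hypothetical reconstruction of the networks from the Experiment Setup row.
# PyTorch and hidden=512 are assumptions; the layer count and output
# activations follow the paper's description.
import torch.nn as nn

def mlp(in_dim, out_dim, out_act, hidden=512):
    """Fully-connected net with 3 hidden ReLU layers, as reported,
    and a configurable output activation."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim), out_act,
    )

dim = 28 * 28                       # e.g. flattened MNIST digits
T = mlp(dim, dim, nn.Sigmoid())     # transport map: pixel brightness in (0, 1)
xi = mlp(dim, 1, nn.Softplus())     # scaling factor: mass weight in (0, inf)
f = mlp(dim, 1, nn.Identity())      # adversary; its form is an assumption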
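Similarly, Algorithm 1 is given in the paper only as pseudocode. The following is a hedged sketch of what its stochastic alternating gradient updates could look like, assuming a simplified objective with a squared-Euclidean transport cost and a quadratic penalty on the scaling factor ξ; the paper's exact objective, its divergence penalty Ψ, and any Lipschitz constraint on the adversary are not reproduced here.

```python
# A sketch of alternating updates for unbalanced Monge OT, NOT the authors'
# implementation. Assumed simplified objective:
#   min_{T, xi} max_f  E_mu[xi(x) * ||x - T(x)||^2]
#                      + lam * E_mu[(xi(x) - 1)^2]
#                      + E_mu[xi(x) * f(T(x))] - E_nu[f(y)]
# A Lipschitz constraint on f (e.g. a gradient penalty) is omitted for brevity.
import torch

def unbalanced_ot_step(T, xi, f, opt_gen, opt_disc, x, y, lam=1.0, n_critic=5):
    # Inner maximization: update the adversary f with generator outputs frozen.
    for _ in range(n_critic):
        opt_disc.zero_grad()
        Tx = T(x).detach()
        w = xi(x).detach().squeeze(-1)          # per-sample mass weights
        adv = (w * f(Tx).squeeze(-1)).mean() - f(y).mean()
        (-adv).backward()                       # gradient ascent on f
        opt_disc.step()

    # Outer minimization: jointly update the transport map T and scaling xi.
    opt_gen.zero_grad()
    Tx = T(x)
    w = xi(x).squeeze(-1)
    cost = w * ((x - Tx) ** 2).sum(dim=1)       # weighted transport cost
    penalty = lam * ((w - 1.0) ** 2)            # soft marginal relaxation
    adv = w * f(Tx).squeeze(-1)                 # adversarial matching term
    loss = (cost + penalty + adv).mean()
    loss.backward()
    opt_gen.step()
    return loss.item()
```

In use, `opt_gen` would be built over the joint parameters of `T` and `xi`, e.g. `torch.optim.Adam(list(T.parameters()) + list(xi.parameters()))`, with a separate `opt_disc` over `f.parameters()`.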