Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation

Authors: Yogesh Balaji, Rama Chellappa, Soheil Feizi

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of our formulation in two applications of GANs and domain adaptation. Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions. In particular, the proposed optimization method computes weights for training samples reflecting how difficult it is for those samples to be generated in the model. In domain adaptation, our robust OT formulation leads to improved accuracy compared to the standard adversarial adaptation methods. Our code is available at https://github.com/yogeshbalaji/robustOT.
Researcher Affiliation | Academia | Yogesh Balaji, Department of Computer Science, University of Maryland, College Park, MD (yogesh@umd.edu); Rama Chellappa, Electrical and Computer Engineering and Biomedical Engineering Departments, Johns Hopkins University, Baltimore, MD (rchella4@jhu.edu); Soheil Feizi, Department of Computer Science, University of Maryland, College Park, MD (sfeizi@cs.umd.edu)
Pseudocode | Yes | A detailed algorithm can be found in the Appendix.
Open Source Code | Yes | Our code is available at https://github.com/yogeshbalaji/robustOT.
Open Datasets | Yes | We artificially add outlier samples to the CIFAR-10 dataset such that they occupy a γ fraction of the samples. MNIST and uniform noise are used as two choices of outlier distributions. (A code sketch of this corruption protocol follows the table.)
Dataset Splits | No | The paper states 'performance on a validation set can be used for choosing ρ' but does not provide specific details on the dataset splits (e.g., percentages, sample counts, or explicit methodology for splitting).
Hardware Specification | No | The paper mentions 'small batch sizes (28) were used due to GPU limitations' but does not specify any particular GPU models, CPU types, or detailed hardware specifications used for running experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | Yes | We set λ to a large value (typically λ = 1000) to enforce the constraint on χ²-divergence. (A sketch of one way to implement such a penalty follows the table.)
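
For readers reconstructing the dataset-corruption protocol quoted under Open Datasets, a minimal PyTorch sketch is given below. The helper names (corrupted_cifar10, NoiseDataset), the default gamma = 0.1, and the preprocessing of MNIST digits (resized to 32x32 and replicated to 3 channels to match CIFAR-10) are illustrative assumptions; the paper only states that outliers drawn from MNIST or uniform noise occupy a γ fraction of the samples.

    import torch
    from torch.utils.data import ConcatDataset, Dataset, Subset
    from torchvision import datasets, transforms

    class NoiseDataset(Dataset):
        """Uniform-noise images in [0, 1] with a dummy integer label."""
        def __init__(self, n):
            self.images = torch.rand(n, 3, 32, 32)
        def __len__(self):
            return len(self.images)
        def __getitem__(self, i):
            return self.images[i], 0

    def corrupted_cifar10(root, gamma=0.1, outlier="mnist"):
        # Resize MNIST digits to 32x32 and replicate to 3 channels so they
        # match CIFAR-10's image shape (an assumption; the paper does not
        # describe its outlier preprocessing).
        to_rgb32 = transforms.Compose([
            transforms.Resize(32),
            transforms.Grayscale(num_output_channels=3),
            transforms.ToTensor(),
        ])
        cifar = datasets.CIFAR10(root, train=True, download=True,
                                 transform=transforms.ToTensor())
        # Choose n_out so outliers form a gamma fraction of the mixture:
        # n_out / (len(cifar) + n_out) == gamma.
        n_out = int(gamma * len(cifar) / (1.0 - gamma))
        if outlier == "mnist":
            mnist = datasets.MNIST(root, train=True, download=True,
                                   transform=to_rgb32)
            out_ds = Subset(mnist, torch.randperm(len(mnist))[:n_out].tolist())
        else:
            out_ds = NoiseDataset(n_out)
        return ConcatDataset([cifar, out_ds])

With gamma = 0.1, this mixes the 50,000 CIFAR-10 training images with about 5,555 outlier samples, so outliers occupy one tenth of the combined dataset.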
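
As a rough illustration of the Experiment Setup row, the sketch below relaxes the χ²-divergence constraint on the per-sample weights into a penalty with coefficient λ = 1000. The function name reweighted_loss, the softmax parameterization of the weights, the radius rho, and the hinge form of the penalty are assumptions made for this example, not the authors' exact implementation (their repository has the definitive version).

    import torch
    import torch.nn.functional as F

    def reweighted_loss(per_sample_loss, logits_w, lam=1000.0, rho=0.1):
        """Weighted loss with a chi^2 penalty keeping weights near uniform.

        per_sample_loss: (n,) tensor of per-sample losses.
        logits_w: (n,) free parameters; softmax times n gives nonnegative
        weights with mean 1. The parameterization, rho, and the hinge
        penalty are illustrative assumptions, not the authors' exact code.
        """
        n = per_sample_loss.numel()
        w = F.softmax(logits_w, dim=0) * n      # w_i >= 0, mean(w) == 1
        chi2 = ((w - 1.0) ** 2).mean()          # chi^2 between w * P_n and P_n
        penalty = lam * F.relu(chi2 - rho)      # active only outside the ball
        return (w * per_sample_loss).mean() + penalty

Minimizing this objective over logits_w shifts weight away from high-loss samples (those that are hard to generate, as the abstract describes), while the large λ keeps the reweighted empirical distribution inside the χ² ball of radius ρ.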