Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching
Authors: Ziliang Chen, Zhanfu Yang, Xiaoxi Wang, Xiaodan Liang, Xiaopeng Yan, Guanbin Li, Liang Lin
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate MMI-ALI in diverse challenging m-domain scenarios and verify its superiority. In this section, we propose diverse cross-m-domain experiments to evaluate our MMI-ALI in generative modeling and show the primal empirical results. |
| Researcher Affiliation | Academia | 1Sun Yat-sen University, China 2Purdue University, USA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | http://github.com/MintYiqingchen/MMI-ALI |
| Open Datasets | Yes | Specifically, we choose MNIST as the base domain, then rotate the images by π/2 to create two other domains. In 3-heterogeneous-domain transfer, we consider Cityscape (Cordts et al., 2016) as the base benchmark... we employ Moji Talk dataset (Zhou & Wang, 2017) that contains 64 emojis... |
| Dataset Splits | No | The paper mentions training and testing sets but does not explicitly provide details about a validation set or specific train/validation/test splits across all experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components and architectures such as DCGAN, ResNet, the GAN loss, and Batch Normalization, but it does not specify version numbers for any software dependencies needed to replicate the experiments. |
| Experiment Setup | No | The paper describes some aspects of the experimental setup, such as two-layer fully-connected nets with ReLU, a DCGAN backbone, the vanilla GAN loss, l1/l2-norm cycle losses, and Batch Normalization. However, it lacks concrete values for key hyperparameters such as the learning rate, batch size, or number of epochs, which are crucial for reproducibility. |
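
The architectural ingredients listed in the table (two-layer fully-connected nets with ReLU activations and l1/l2-norm cycle-consistency losses) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the layer sizes, initialization, and the identity stand-in for the reverse mapping are illustrative assumptions, and the helper names (`two_layer_relu_net`, `cycle_loss`) are hypothetical.

```python
import numpy as np

def two_layer_relu_net(x, W1, b1, W2, b2):
    """Two-layer fully-connected net with a ReLU hidden layer."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU nonlinearity
    return h @ W2 + b2

def cycle_loss(x, f, g, norm="l1"):
    """Cycle-consistency loss ||g(f(x)) - x|| under an l1 or l2 norm."""
    diff = g(f(x)) - x
    if norm == "l1":
        return np.mean(np.abs(diff))
    return np.mean(diff ** 2)

rng = np.random.default_rng(0)
d, h = 4, 8  # illustrative dimensions, not values from the paper
W1, b1 = rng.normal(size=(d, h)) * 0.1, np.zeros(h)
W2, b2 = rng.normal(size=(h, d)) * 0.1, np.zeros(d)

f = lambda z: two_layer_relu_net(z, W1, b1, W2, b2)  # forward mapping
g = lambda z: z  # identity stand-in for the reverse mapping

x = rng.normal(size=(2, d))
loss = cycle_loss(x, f, g, norm="l1")
```

A perfect inverse pair drives the cycle loss to zero, which is the property the l1/l2 cycle terms in the paper enforce between domain mappings.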