MIXGAN: Learning Concepts from Different Domains for Mixture Generation

Authors: Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate the effectiveness of MIXGAN compared to related state-of-the-art GAN-based models. We evaluate MIXGAN on several tasks. The experimental results show that the model can learn to generate images in a new domain, e.g., generating colorful hand-written digits when it observes only black-and-white hand-written digits [LeCun et al., 2010] and colorful typescript ones [Netzer et al., 2011]. We show experimental results on three aspects.
Researcher Affiliation | Academia | Guang-Yuan Hao (1), Hong-Xing Yu (1,4), Wei-Shi Zheng (1,2,3); (1) School of Data and Computer Science, Sun Yat-sen University, China; (2) Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China; (3) Collaborative Innovation Center of High Performance Computing, NUDT, China; (4) Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement about code release or a link to a code repository for the described methodology.
Open Datasets | Yes | We take hand-written digits from the MNIST dataset [LeCun et al., 2010] and typescript digits from the SVHN dataset [Netzer et al., 2011]. (A dataset-loading sketch appears after this table.)
Dataset Splits | No | The paper mentions using the MNIST and SVHN datasets but does not specify the training, validation, or test split percentages or exact counts used in the experiments.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions using the "Adam [Kingma and Ba, 2014] solver" but does not provide version numbers for Adam or any other software libraries or dependencies.
Experiment Setup | Yes | During training, we use the Adam [Kingma and Ba, 2014] solver with learning rate 0.0002, β1 = 0.5, and β2 = 0.999. It reaches convergence typically within 100 epochs. Then, as Gc can already reproduce the learned concept, we optimize Eq. (4) to train the mixture generator G, also in an iterative adversarial learning pattern. We use the same Adam solver, and training typically converges within 300 epochs. (An optimizer-configuration sketch appears after this table.)
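
For reference alongside the Open Datasets row, here is a minimal sketch of loading MNIST and SVHN with torchvision. The paper does not describe its data pipeline, so the root paths, batch size, resizing to 32x32, and normalization below are assumptions, not the authors' setup.

```python
# Minimal sketch: loading MNIST and SVHN with torchvision.
# Root paths, batch size, and the shared transform are assumptions;
# the paper does not specify its preprocessing.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(32),                  # SVHN is 32x32; MNIST is 28x28
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),   # scale pixels to [-1, 1] for GAN training
])

mnist = datasets.MNIST(root="data/mnist", train=True, download=True,
                       transform=transform)
svhn = datasets.SVHN(root="data/svhn", split="train", download=True,
                     transform=transform)

mnist_loader = DataLoader(mnist, batch_size=64, shuffle=True)
svhn_loader = DataLoader(svhn, batch_size=64, shuffle=True)
```

The Normalize transform with a single-element mean/std broadcasts across both the 1-channel MNIST tensors and the 3-channel SVHN tensors.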
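The Experiment Setup row reports the only concrete hyperparameters in the paper. Below is a minimal sketch of that optimizer configuration, assuming PyTorch (the paper names no framework); the modules G_c, G, and D are hypothetical placeholders, since the report does not describe the network architectures.

```python
# Minimal sketch of the reported optimizer settings:
# Adam with lr = 0.0002, beta1 = 0.5, beta2 = 0.999.
# PyTorch is an assumption; G_c, G, and D are placeholder modules.
import torch
import torch.nn as nn

G_c = nn.Linear(100, 784)        # concept generator (placeholder architecture)
G = nn.Linear(100, 3 * 32 * 32)  # mixture generator (placeholder architecture)
D = nn.Linear(3 * 32 * 32, 1)    # discriminator (placeholder architecture)

def make_adam(module: nn.Module) -> torch.optim.Adam:
    # The same solver settings are reported for both training stages.
    return torch.optim.Adam(module.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Stage 1: train G_c adversarially; convergence within ~100 epochs is reported.
opt_Gc, opt_D = make_adam(G_c), make_adam(D)

# Stage 2: with the learned concept captured by G_c, optimize Eq. (4) to train
# the mixture generator G with the same solver; ~300 epochs is reported.
opt_G = make_adam(G)
```

The two optimizer instantiations mirror the paper's two-stage schedule: first the concept generator is trained to convergence, then the mixture generator is trained with an identically configured Adam solver.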