MisGAN: Learning from Incomplete Data with Generative Adversarial Networks
Authors: Steven Cheng-Xian Li, Bo Jiang, Benjamin Marlin
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption. |
| Researcher Affiliation | Academia | Steven Cheng-Xian Li University of Massachusetts Amherst cxl@cs.umass.edu Bo Jiang Shanghai Jiao Tong University bjiang@sjtu.edu.cn Benjamin M. Marlin University of Massachusetts Amherst marlin@cs.umass.edu |
| Pseudocode | No | The paper describes algorithms and formulations in text and mathematical equations, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our implementation is available at https://github.com/steveli/misgan |
| Open Datasets | Yes | Data We evaluate MisGAN on three datasets: MNIST, CIFAR-10 and CelebA. MNIST is a dataset of handwritten digit images of size 28×28 (LeCun et al., 1998). ... CIFAR-10 is a dataset of 32×32 color images from 10 classes (Krizhevsky, 2009). ... CelebA is a large-scale face attributes dataset (Liu et al., 2015)... |
| Dataset Splits | No | The paper mentions using training examples (e.g., '60,000 training examples for the experiments' for MNIST) and evaluating imputation on 'incomplete examples in the training set', but it does not specify explicit training/validation/test splits for model development and evaluation in the standard sense (e.g., an 80/10/10 split). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or cloud computing instance specifications. |
| Software Dependencies | No | The paper mentions using DCGAN architecture, Wasserstein GAN with gradient penalty, and U-Net architecture, and adapting code from GAIN, but it does not specify version numbers for any software libraries or dependencies (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For all the datasets, MisGAN is trained for 300 epochs. We train the MisGAN imputer for 1000 epochs for MNIST and CIFAR-10 as the networks are smaller and 600 epochs for CelebA. |
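The experiments above are run under the missing completely at random (MCAR) assumption, in which each entry of an example is dropped independently of the data values. As an illustration of what that setup looks like in practice, the sketch below generates MCAR masks for a batch of MNIST-sized arrays; the function name `mcar_mask` and the 0.5 missing rate are illustrative choices, not taken from the paper's released code.

```python
import numpy as np

def mcar_mask(shape, missing_rate, rng=None):
    """Binary mask under the MCAR assumption: each entry is observed
    (mask value 1) independently with probability 1 - missing_rate,
    regardless of the underlying data values."""
    rng = np.random.default_rng(rng)
    return (rng.random(shape) > missing_rate).astype(np.float32)

# Mask a batch of 28x28 arrays (MNIST-sized) at a 50% missing rate.
batch = np.ones((4, 28, 28), dtype=np.float32)
mask = mcar_mask(batch.shape, missing_rate=0.5, rng=0)
observed = batch * mask  # unobserved entries are zeroed out
```

Because the mask is drawn independently of `batch`, the observed fraction concentrates around `1 - missing_rate` as the batch grows, which is exactly the property that distinguishes MCAR from value-dependent missingness mechanisms.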