IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse
Authors: Yang Li, Liangliang Shi, Junchi Yan
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are conducted on a single GeForce RTX 3090. Synthetic data results are performed on GeForce RTX 2080Ti. |
| Researcher Affiliation | Academia | Yang Li, Liangliang Shi and Junchi Yan, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University. {yanglily, shiliangliang, yanjunchi}@sjtu.edu.cn |
| Pseudocode | No | The paper describes the proposed approach in detail, including equations and a loss function, but does not provide any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | The experimented image datasets include MNIST [LeCun et al., 1998], Stacked MNIST [Metz et al., 2017], CIFAR10 [Krizhevsky et al., 2009], STL-10 [Coates et al., 2011], LSUN [Yu et al., 2015] and CELEBA [Liu et al., 2015]. |
| Dataset Splits | No | The paper mentions training and testing on the different datasets but does not provide split percentages, sample counts, or a methodology for constructing validation splits, which would be needed for reproducibility. |
| Hardware Specification | Yes | Experiments are conducted on a single GeForce RTX 3090. Synthetic data results are performed on GeForce RTX 2080Ti. |
| Software Dependencies | No | The paper describes the model architecture and training parameters, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set the loss weights as (λ_re, λ_Gau) = (3, 10). Training batch size is set as 100 and we conduct 300 epochs for training. ... We set the loss weights as (λ_re, λ_Gau) = (0.5, 0.1). ... The weights are set as (λ_re, λ_Gau) = (3, 3). Latent space dimension is set as 100. ... All models are trained for 100K steps (mini-batches). We set (λ_re, λ_Gau) = (0.01, 0.1). Latent space dimension is set as 128 and training batch size is set as 128. |
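
The hyperparameters quoted in the Experiment Setup row can be grouped into a small configuration sketch. This is only an illustrative summary of the numbers reported above; the `TrainConfig` class, its field names, and the per-run grouping are assumptions made here, since the authors do not release code.

```python
# Illustrative summary of the hyperparameters quoted in the "Experiment Setup" row.
# The TrainConfig class and field names are hypothetical, not taken from the paper's code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainConfig:
    lambda_re: float                  # weight of the reconstruction loss term (λ_re)
    lambda_gau: float                 # weight of the Gaussian/IID regularization term (λ_Gau)
    batch_size: Optional[int] = None
    latent_dim: Optional[int] = None
    epochs: Optional[int] = None
    steps: Optional[int] = None       # training steps (mini-batches), when epochs are not given


# One reported setting: (λ_re, λ_Gau) = (3, 10), batch size 100, 300 training epochs.
config_a = TrainConfig(lambda_re=3, lambda_gau=10, batch_size=100, epochs=300)

# Another reported setting: (λ_re, λ_Gau) = (0.01, 0.1), latent dimension 128,
# batch size 128, trained for 100K steps (mini-batches).
config_b = TrainConfig(lambda_re=0.01, lambda_gau=0.1, batch_size=128,
                       latent_dim=128, steps=100_000)
```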