IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks

Authors: Insu Jeon, Wonkwang Lee, Myeongjang Pyeon, Gunhee Kim (pp. 7926-7934)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "With the experiments on dSprites and Color-dSprites datasets, we demonstrate that IB-GAN achieves competitive disentanglement scores to those of state-of-the-art β-VAEs and outperforms InfoGAN."
Researcher Affiliation | Academia | "1 Dept. of Computer Science and Engineering, Seoul National University, Republic of Korea; 2 School of Computing, Korea Advanced Institute of Science and Technology, Republic of Korea"
Pseudocode | Yes | "Algorithm 1: IB-GAN training algorithm"
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the described methodology.
Open Datasets | Yes | "For quantitative evaluation, we measure the disentanglement metrics proposed in (Kim and Mnih 2018) on the dSprites (Higgins et al. 2017a) and Color-dSprites (Burgess et al. 2018; Locatello et al. 2019) datasets. For qualitative evaluation, we visualize latent traversal results of IB-GAN and measure FID scores (Szegedy et al. 2015) on the CelebA (Liu et al. 2015) and 3D Chairs (Aubry et al. 2014) datasets."
Dataset Splits | No | The paper mentions using the dSprites and Color-dSprites datasets and evaluating with a specific metric, but it does not give explicit training/validation/test splits (no percentages or sample counts).
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions following the DCGAN architecture with batch normalization and using RMSProp for optimization, but it does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "Optimization is performed with RMSProp (Tieleman and Hinton 2012) with a momentum of 0.9. The batch size is 64 in all experiments. We constrain true and synthetic images to be normalized as [-1, 1]."
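Since no official code is released, the reported setup (RMSProp with momentum 0.9, batch size 64, images scaled to [-1, 1]) can only be approximated. The NumPy sketch below illustrates these two pieces; the learning rate, decay rate, and all function names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def to_unit_range(images_uint8):
    # Scale uint8 pixels [0, 255] to [-1, 1], matching the paper's
    # stated normalization of true and synthetic images.
    return images_uint8.astype(np.float32) / 127.5 - 1.0

class RMSPropMomentum:
    """Minimal RMSProp with a momentum buffer (momentum=0.9 per the paper).

    lr, decay, and eps are illustrative assumptions, not reported values.
    """
    def __init__(self, lr=1e-4, momentum=0.9, decay=0.99, eps=1e-8):
        self.lr, self.momentum, self.decay, self.eps = lr, momentum, decay, eps
        self.s = None  # running average of squared gradients
        self.m = None  # momentum buffer

    def step(self, params, grads):
        if self.s is None:
            self.s = np.zeros_like(params)
            self.m = np.zeros_like(params)
        self.s = self.decay * self.s + (1 - self.decay) * grads ** 2
        self.m = self.momentum * self.m + self.lr * grads / (np.sqrt(self.s) + self.eps)
        return params - self.m

# Batch size 64, as in all of the paper's experiments (image size assumed).
batch = to_unit_range(np.random.randint(0, 256, size=(64, 64, 64, 1), dtype=np.uint8))
assert batch.min() >= -1.0 and batch.max() <= 1.0
```

In frameworks such as PyTorch or TensorFlow, the equivalent would be their built-in RMSProp optimizer with `momentum=0.9`; the manual update above is only meant to make the reported hyperparameters concrete.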