OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

Authors: Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard de Melo, Ahmed Elgammal

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct quantitative and qualitative experiments to demonstrate the advantages of our method on several datasets.
Researcher Affiliation | Academia | Department of Computer Science, Rutgers University. {bingchen.liu, yizhe.zhu, zuohui.fu, gerard.demelo}@rutgers.edu, elgammal@cs.rutgers.edu
Pseudocode | No | The paper includes diagrams of model structures but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | All the code to reproduce our experiments is available on GitHub, and training configurations can be found there.
Open Datasets | Yes | Quantitative experiments on the dSprites dataset (Matthey et al. 2017) following the metric proposed by Kim and Mnih (2018). After that, we show the superiority of OOGAN in generating high-quality images while maintaining competitive disentanglement compared to VAE-based models on CelebA (Liu et al. 2015) and 3D-chair (Aubry et al. 2014) data.
Dataset Splits | No | The paper uses standard datasets and refers to setups from prior work (Kim and Mnih 2018; Jeon, Lee, and Kim 2019), but it does not explicitly state the specific training, validation, and test split percentages or sample counts within its own text.
Hardware Specification | Yes | We perform all the experiments on one NVIDIA RTX 2080Ti GPU.
Software Dependencies | No | The paper states that code is available on GitHub with training configurations but does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | On the 3D Chairs data, we use 64x64 RGB images with a batch size of 64 for all training runs, and varying the hyperparameters λ (1 to 5) and γ (0.2 to 2) in our loss function always yields consistent performance. (A hypothetical configuration sketch follows the table.)
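
The quoted setup can be captured in a short configuration sketch. This is a minimal illustration in Python, assuming a dataclass-based training config; the names (`OOGANConfig`, `lambda_weight`, `gamma_weight`) are hypothetical and do not come from the paper or its GitHub repository.

```python
from dataclasses import dataclass

@dataclass
class OOGANConfig:
    """Hypothetical config mirroring the quoted setup; field names are illustrative."""
    image_size: int = 64        # 64x64 images (3D Chairs)
    channels: int = 3           # RGB
    batch_size: int = 64        # batch size 64 for all training runs
    lambda_weight: float = 1.0  # λ: performance reported consistent over [1, 5]
    gamma_weight: float = 0.2   # γ: performance reported consistent over [0.2, 2]

# Example: sweep the hyperparameter ranges the authors report as stable.
sweeps = [
    OOGANConfig(lambda_weight=lam, gamma_weight=gam)
    for lam in (1.0, 3.0, 5.0)
    for gam in (0.2, 1.0, 2.0)
]
```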