Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data
Authors: Utkarsh Ojha, Krishna Kumar Singh, Cho-Jui Hsieh, Yong Jae Lee
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both artificial (MNIST, 3D Cars, 3D Chairs, ShapeNet) and real-world (YouTube-Faces) imbalanced datasets demonstrate the effectiveness of our method in disentangling object identity as a latent factor of variation. |
| Researcher Affiliation | Collaboration | Utkarsh Ojha (UC Davis), Krishna Kumar Singh (UC Davis, Adobe Research), Cho-Jui Hsieh (UCLA), Yong Jae Lee (UC Davis) |
| Pseudocode | No | The paper includes mathematical formulations and block diagrams (e.g., Figure 2) to illustrate the model, but it does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor are there any structured code-like procedural steps. |
| Open Source Code | Yes | utkarshojha.github.io/elastic-infogan/ |
| Open Datasets | Yes | Datasets (1) MNIST [34]... (2) 3D Cars [17]... (3) 3D Chairs [2]... (4) ShapeNet... (5) YouTube-Faces [51]... |
| Dataset Splits | Yes | We train the classifier by creating an 80/20 train/val split on a per-class basis. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware specifications (e.g., GPU model, CPU, memory) used for conducting the experiments. |
| Software Dependencies | No | The paper discusses the use of Gumbel-Softmax as a technique and implies common deep learning frameworks, but it does not explicitly list any software dependencies with specific version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper mentions parameters like softmax temperature (τ) and loss weights (λ1, λ2) in the equations and discussion. However, it lacks specific numerical values for common experimental setup details such as learning rates, batch sizes, optimizers, or the number of training epochs in the main text. |
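The per-class 80/20 train/val split reported above is the one split-related detail the paper does give. A minimal sketch of such a split, assuming plain `(sample, label)` pairs and an illustrative `per_class_split` helper name not taken from the paper:

```python
import random
from collections import defaultdict

def per_class_split(samples, labels, val_frac=0.2, seed=0):
    """Hold out `val_frac` of the samples of each class for validation,
    so class imbalance is preserved in both splits."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    train, val = [], []
    for y, items in by_class.items():
        rng.shuffle(items)
        n_val = int(len(items) * val_frac)
        val.extend((s, y) for s in items[:n_val])
        train.extend((s, y) for s in items[n_val:])
    return train, val
```

Splitting per class (rather than over the pooled dataset) keeps rare classes represented in the validation set, which matters for the imbalanced datasets the paper studies.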
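The Gumbel-Softmax relaxation and its temperature τ, mentioned in the Software Dependencies and Experiment Setup rows, can be sketched in pure Python. This is a generic illustration of the technique, not the paper's implementation; the function name and default temperature are assumptions:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Draw a differentiable, approximately one-hot sample from a
    categorical distribution via the Gumbel-Softmax relaxation.
    Lower temperatures `tau` push the sample closer to one-hot."""
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1)
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    y = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed logits.
    m = max(y)
    exps = [math.exp(v - m) for v in y]
    total = sum(exps)
    return [e / total for e in exps]
```

The output is always a valid probability vector, which is what lets the categorical latent code be sampled while keeping the generator end-to-end differentiable.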