ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
Authors: Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, Sreeram Kannan
AAAI 2019, pp. 4610-4617 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets. We compare ClusterGAN and other possible GAN-based clustering algorithms, such as InfoGAN, along with multiple clustering baselines on varied datasets. This demonstrates the superior performance of ClusterGAN for the clustering task. |
| Researcher Affiliation | Academia | Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, Sreeram Kannan {sudipm, asnani, lines, ksreeram}@uw.edu Department of Electrical and Computer Engineering, University of Washington, Seattle, WA-98195. |
| Pseudocode | No | The paper describes algorithmic steps in text but does not include structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code for ClusterGAN is available at https://github.com/sudiptodip15/ClusterGAN. |
| Open Datasets | Yes | MNIST consists of 70k images of digits ranging from 0 to 9. Each data sample is a 28 × 28 greyscale image. Fashion-MNIST (10 and 5 classes) has the same number of images at the same image size as MNIST, but is considerably more complicated. The single-cell RNA-seq dataset consists of RNA-transcript counts of 73,233 data points belonging to 8 different cell types (Zheng et al. 2017). |
| Dataset Splits | No | The paper mentions datasets like MNIST with total sizes but does not provide specific training, validation, and test split percentages or sample counts. While it may imply standard splits, it does not explicitly state them for its own experimental setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer and the WGAN-GP variant but does not list software dependencies with version numbers, such as the programming language, libraries, or frameworks used. |
| Experiment Setup | Yes | Some choices of regularization parameters, λ = 10, β_n = 10, β_c = 10, worked well across all datasets. Adam (Kingma and Ba 2014) is used for the updates during backprop decoding. |
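For context on the Experiment Setup row: in the paper, λ weights the WGAN-GP gradient penalty, while β_n and β_c weight the two latent-recovery terms of ClusterGAN's encoder loss, encouraging the encoder to reconstruct the continuous part z_n (squared error) and the discrete one-hot part z_c (cross-entropy) of the sampled latent code. The sketch below illustrates only the paper's mixed discrete-continuous latent prior z = (z_n, z_c); the function name `sample_z` and the defaults `dim_n = 30` and `n_classes = 10` are illustrative assumptions, not values checked against the released code.

```python
import numpy as np

def sample_z(batch_size, dim_n=30, n_classes=10, sigma=0.10, rng=None):
    """Sample ClusterGAN-style latent codes z = (z_n, z_c).

    z_n is drawn from N(0, sigma^2 I) and z_c is a one-hot encoding of a
    uniformly chosen cluster index. dim_n, n_classes and sigma are
    illustrative defaults, not settings verified against the paper's code.
    """
    rng = rng or np.random.default_rng()
    # Continuous part: small-variance Gaussian, so the one-hot part
    # dominates the geometry of the latent space.
    z_n = sigma * rng.standard_normal((batch_size, dim_n))
    # Discrete part: uniformly sampled cluster index, one-hot encoded.
    idx = rng.integers(0, n_classes, size=batch_size)
    z_c = np.zeros((batch_size, n_classes))
    z_c[np.arange(batch_size), idx] = 1.0
    return np.concatenate([z_n, z_c], axis=1), idx

# Example: a batch of 64 latent codes for a 10-cluster setup.
z, cluster_ids = sample_z(64)
print(z.shape)  # (64, 40): 30 continuous dims + 10 one-hot dims
```

At clustering time, the paper's encoder maps a real sample back to an estimate of (z_n, z_c), and the argmax over the z_c portion gives the cluster assignment; the β_n and β_c weights quoted in the table keep that inverse mapping consistent with the sampled prior.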