PixelGAN Autoencoders
Authors: Alireza Makhzani, Brendan J. Frey
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets. In this section, we only present the performance of the PixelGAN autoencoder on downstream tasks such as unsupervised clustering and semi-supervised classification. |
| Researcher Affiliation | Academia | Alireza Makhzani, Brendan Frey, University of Toronto, {makhzani,frey}@psi.toronto.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that code is available. |
| Open Datasets | Yes | We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets. |
| Dataset Splits | Yes | Once the training is done, for each cluster i, we found the validation example xn that maximizes q(zi\|xn), and assigned the label of xn to all the points in the cluster i. We train a PixelGAN autoencoder on the first three digits of MNIST (18000 training and 3000 test points). (A minimal sketch of this cluster-labeling step appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. While the acknowledgments thank "NVIDIA for GPU donations", the paper does not specify the hardware used for the experiments. |
| Software Dependencies | No | The paper cites TensorFlow in its references but does not provide specific version numbers for it or for any other key software components, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | We train all models for 200 epochs on MNIST and 1000 epochs on SVHN using Adam optimizer [40] with a learning rate of 0.0002 and a batch size of 64. (See the training-settings sketch below the table.) |
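
The cluster-labeling rule quoted under Dataset Splits is concrete enough to sketch. The snippet below is a minimal NumPy illustration, not the authors' code (none is released); `q_z`, `val_labels`, and `label_clusters` are hypothetical names, and `q_z[n, i]` stands in for the inferred posterior q(zi|xn) over K cluster heads.

```python
import numpy as np

def label_clusters(q_z, val_labels):
    """Give each cluster the label of the validation example that maximizes
    q(z_i | x_n), following the procedure quoted in the table above.

    q_z        : (N_val, K) array; q_z[n, i] plays the role of q(z_i | x_n)
    val_labels : (N_val,) array of ground-truth validation labels
    """
    num_clusters = q_z.shape[1]
    cluster_labels = np.empty(num_clusters, dtype=val_labels.dtype)
    for i in range(num_clusters):
        n_star = np.argmax(q_z[:, i])  # validation example most confident for cluster i
        cluster_labels[i] = val_labels[n_star]
    return cluster_labels

# Test points then inherit the label of their most probable cluster:
# predictions = cluster_labels[np.argmax(q_z_test, axis=1)]
```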
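
The Experiment Setup row pins down the optimizer hyperparameters, which can be expressed directly in code. Below is a hedged sketch assuming a TensorFlow/Keras pipeline (the paper cites TensorFlow but releases no implementation); only the optimizer choice, learning rate, batch size, and epoch counts come from the paper, while the input pipeline and update step are illustrative placeholders.

```python
import tensorflow as tf

# Reported settings: Adam, learning rate 0.0002, batch size 64,
# 200 epochs on MNIST and 1000 epochs on SVHN.
LEARNING_RATE = 2e-4
BATCH_SIZE = 64
EPOCHS = {"mnist": 200, "svhn": 1000}

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)

# Assumed MNIST input pipeline (preprocessing is not specified in the paper).
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
train_ds = (tf.data.Dataset.from_tensor_slices(x_train)
            .shuffle(len(x_train))
            .batch(BATCH_SIZE))

# A per-step update would then take the usual form (model and losses omitted):
#   with tf.GradientTape() as tape:
#       loss = ...                                   # reconstruction + adversarial terms
#   grads = tape.gradient(loss, model.trainable_variables)
#   optimizer.apply_gradients(zip(grads, model.trainable_variables))
```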