Triple Generative Adversarial Nets
Authors: Chongxuan Li, Kun Xu, Jun Zhu, Bo Zhang
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we evaluate our model on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. The results (See Sec. 5) demonstrate that Triple-GAN can simultaneously learn a good classifier and a conditional generator, which agrees with our motivation and theoretical results. |
| Researcher Affiliation | Academia | Chongxuan Li, Kun Xu, Jun Zhu, Bo Zhang Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab of Intell. Tech. & Sys., Center for Bio-Inspired Computing Research, Tsinghua University, Beijing, 100084, China {licx14, xu-k16}@mails.tsinghua.edu.cn, {dcszj, dcszb}@mail.tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 Minibatch stochastic gradient descent training of Triple-GAN in SSL. |
| Open Source Code | Yes | Our source code is available at https://github.com/zhenxuan00/triple-gan |
| Open Datasets | Yes | We evaluate our model on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. |
| Dataset Splits | Yes | MNIST consists of 50,000 training samples, 10,000 validation samples and 10,000 testing samples of handwritten digits of size 28×28. ... We split 5,000 training data of SVHN and CIFAR10 for validation if needed. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or specific computing environments used for the experiments. |
| Software Dependencies | No | The paper mentions implementing the method based on 'Theano [27]' but does not specify a version number for Theano or any other software dependencies. |
| Experiment Setup | Yes | We only search the threshold in {200, 300}, αP in {0.1, 0.03} and the global learning rate in {0.0003, 0.001} based on the validation performance on each dataset. All of the other hyperparameters including relative weights and parameters in Adam [9] are fixed according to [25, 15] across all of the experiments. |
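The pseudocode cited above (Algorithm 1, minibatch SGD training of Triple-GAN in SSL) involves three players: a classifier C producing pseudo-labels for unlabeled data, a conditional generator G, and a discriminator D scoring (x, y) pairs. The sketch below illustrates one forward pass of the discriminator loss for a single minibatch, using toy linear stand-ins for the three networks; all dimensions, the α weight value, and the network parameterizations are assumptions for illustration, not the paper's architecture, and parameter updates are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins for the three players (dimensions assumed, not from the paper).
DIM, K, NOISE = 8, 3, 4
Wc = rng.normal(size=(DIM, K)) * 0.1           # classifier C
Wg = rng.normal(size=(NOISE + K, DIM)) * 0.1   # conditional generator G
Wd = rng.normal(size=(DIM + K, 1)) * 0.1       # discriminator D on (x, y) pairs

def classify(x):                # C: p_c(y | x)
    return softmax(x @ Wc)

def generate(y_onehot):         # G: p_g(x | y), maps (noise, y) -> x
    z = rng.normal(size=(len(y_onehot), NOISE))
    return np.concatenate([z, y_onehot], axis=1) @ Wg

def discriminate(x, y_onehot):  # D(x, y) in (0, 1)
    return sigmoid(np.concatenate([x, y_onehot], axis=1) @ Wd).ravel()

def onehot(y):
    return np.eye(K)[y]

# One minibatch: labeled pairs, unlabeled samples pseudo-labeled by C,
# and generated pairs from G. alpha balances C's vs. G's fake pairs
# (the value 0.5 here is illustrative).
alpha = 0.5
xl, yl = rng.normal(size=(16, DIM)), rng.integers(0, K, 16)  # labeled batch
xu = rng.normal(size=(16, DIM))                              # unlabeled batch

yc = classify(xu).argmax(axis=1)   # pseudo-labels from C
yg = rng.integers(0, K, 16)        # labels sampled for G
xg = generate(onehot(yg))          # generated samples

eps = 1e-8  # numerical floor inside the logs
d_loss = -(np.log(discriminate(xl, onehot(yl)) + eps).mean()
           + alpha * np.log(1 - discriminate(xu, onehot(yc)) + eps).mean()
           + (1 - alpha) * np.log(1 - discriminate(xg, onehot(yg)) + eps).mean())
print(float(d_loss))
```

In the full algorithm this loss would be followed by gradient updates for D, C, and G in turn each minibatch; here only the loss evaluation is shown to make the three-player structure concrete.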