Chi-square Generative Adversarial Network
Authors: Chenyang Tao, Liqun Chen, Ricardo Henao, Jianfeng Feng, Lawrence Carin
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed procedure improves stability and convergence, and yields state-of-the-art results on a wide range of generative modeling tasks. |
| Researcher Affiliation | Academia | 1Electrical & Computer Engineering, Duke University, Durham, NC 27708, USA 2ISTBI, Fudan University, Shanghai, China. |
| Pseudocode | Yes | Algorithm 1 χ2 GAN. Input: data {xi}, batchsize b, decay ρ, learning rate δ. for t = 1, 2, 3, . . . do... |
| Open Source Code | Yes | Details of the experimental setup are in the SM, and code for our experiments are available from https://www.github.com/chenyang-tao/chi2gan. |
| Open Datasets | Yes | MNIST We used the binarized MNIST in this experiment and compared with prior results in Table 1. |
| Dataset Splits | No | The paper mentions training and testing on standard datasets (e.g., MNIST, CIFAR-10), but it does not provide specific details on the training, validation, and test splits (e.g., exact percentages or sample counts). |
| Hardware Specification | Yes | All experiments are implemented with Tensorflow and run on a single NVIDIA TITAN X GPU. |
| Software Dependencies | No | The paper states that "All experiments are implemented with Tensorflow," but it does not specify version numbers for Tensorflow or any other software dependencies. |
| Experiment Setup | No | The paper states "In all experiments we have used Xavier initialization and Adam optimizer," but it does not report the full hyperparameter settings in the main text. |