Activation Maximization Generative Adversarial Nets
Authors: Zhiming Zhou, Han Cai, Shu Rong, Yuxuan Song, Kan Ren, Weinan Zhang, Jun Wang, Yong Yu
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. (See the Inception Score sketch after this table.) |
| Researcher Affiliation | Collaboration | Zhiming Zhou, Han Cai (Shanghai Jiao Tong University) heyohai,hcai@apex.sjtu.edu.cn; Shu Rong (Yitu Tech) shu.rong@yitu-inc.com; Yuxuan Song, Kan Ren (Shanghai Jiao Tong University) songyuxuan,kren@apex.sjtu.edu.cn; Jun Wang (University College London) j.wang@cs.ucl.ac.uk; Weinan Zhang, Yong Yu (Shanghai Jiao Tong University) wnzhang@sjtu.edu.cn, yyu@apex.sjtu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The repeatable experiment code is published for further research. Link for anonymous experiment code: https://github.com/ZhimingZhou/AM-GAN |
| Open Datasets | Yes | We conduct experiments on the image benchmark datasets including CIFAR-10 and Tiny-ImageNet (https://tiny-imagenet.herokuapp.com/), which comprises 200 classes with 500 training images per class. |
| Dataset Splits | No | The paper mentions using the CIFAR-10 and Tiny-ImageNet datasets but does not explicitly provide details about specific training/validation/test splits, such as percentages or sample counts for each partition. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as library or solver names and their exact versions. |
| Experiment Setup | Yes | Optimizer: Adam with beta1=0.5, beta2=0.999; batch size = 100. Learning rate: exponential decay with stair, initial learning rate 0.0004. We use weight normalization for each weight. (See the configuration sketch after this table.) |
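
The Research Type row above cites the paper's headline result of an Inception Score of 8.91 on CIFAR-10. As a reminder of what that metric measures, below is a minimal sketch of the standard Inception Score formula, IS = exp(E_x[KL(p(y|x) || p(y))]). The probability matrix here is synthetic; in practice p(y|x) comes from a pretrained Inception network and scores are averaged over splits, both of which are omitted here.

```python
# Minimal Inception Score sketch: IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ).
# The class-probability matrix below is synthetic (random softmax outputs);
# obtaining real p(y|x) from a pretrained Inception network is not shown.
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """probs: array of shape (num_samples, num_classes); rows sum to 1."""
    p_y = probs.mean(axis=0, keepdims=True)           # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))       # exp of mean per-sample KL

# Toy usage with random, softmax-normalized probabilities.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```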
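
The Experiment Setup row reports only hyperparameters (Adam with beta1=0.5, beta2=0.999, batch size 100, staircase exponential learning-rate decay from 0.0004, weight normalization on weights). The sketch below wires those values together, assuming PyTorch as the framework; the toy generator, the decay interval (10,000 steps), and the decay factor (0.9) are illustrative assumptions not stated in the excerpt. The actual AM-GAN architecture and training loop are in the released code (https://github.com/ZhimingZhou/AM-GAN).

```python
# Hedged configuration sketch of the reported training setup (PyTorch assumed).
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Toy generator stub; the real AM-GAN architecture is defined in the released code.
generator = nn.Sequential(
    weight_norm(nn.Linear(128, 256)),        # weight normalization per layer
    nn.ReLU(),
    weight_norm(nn.Linear(256, 3 * 32 * 32)),
    nn.Tanh(),
)

optimizer = torch.optim.Adam(
    generator.parameters(),
    lr=4e-4,                  # initial learning rate 0.0004
    betas=(0.5, 0.999),       # beta1=0.5, beta2=0.999
)

# "Exponential decay with stair": step-wise (staircase) decay. The step size
# and decay factor below are illustrative guesses, not values from the paper.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.9)

batch_size = 100              # batch size reported in the paper
```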