Generative Warfare Nets: Ensemble via Adversaries and Collaborators

Authors: Honglun Zhang, Liqiang Xiao, Wenqing Chen, Yongkun Wang, Yaohui Jin

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three natural image datasets show that GWN can achieve state-of-the-art Inception scores and produce diverse high-quality synthetic results. In this section, we conduct detailed discussions about the implementations of GWN, investigate the performances of GWN on three natural image datasets and compare it to state-of-the-art baselines.
Researcher Affiliation | Academia | 1 State Key Lab of Advanced Optical Communication System and Network, Shanghai Jiao Tong University; 2 Artificial Intelligence Institute, Shanghai Jiao Tong University; 3 Network and Information Center, Shanghai Jiao Tong University; {jinyh}@sjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Training Generative Warfare Nets
Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the methodology.
Open Datasets | Yes | We select three natural image datasets with increasing diversities and sizes to conduct experiments for GWN: CIFAR10 [Krizhevsky and Hinton, 2009], STL-10 [Coates et al., 2011] and ImageNet [Russakovsky et al., 2015].
Dataset Splits | No | The paper mentions using CIFAR-10, STL-10, and ImageNet but does not explicitly state specific train/validation/test splits (e.g., percentages, sample counts, or citations to standard splits used for partitioning).
Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud computing specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'Adam optimizers' and 'layer normalization' but does not specify software components with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x') required to replicate the experiment.
Experiment Setup | Yes | batch size K = 64; gradient penalty λ = 10; discriminator iterations per generator update n_d = 5; Adam hyperparameters α = 0.0001, β1 = 0.5, β2 = 0.9 (see the configuration sketch below)
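
For the Open Datasets row, a minimal data-loading sketch follows. It assumes torchvision, which the paper does not mention, and an illustrative 32x32 preprocessing; CIFAR-10 and STL-10 can be downloaded automatically, while ImageNet must be obtained separately.

```python
# Hedged sketch: loading the three datasets named in the paper via torchvision.
# The framework choice and the 32x32 preprocessing are assumptions, not details
# taken from the paper.
import torch
from torchvision import datasets, transforms

to_tensor = transforms.Compose([
    transforms.Resize(32),      # assumed common resolution for this sketch
    transforms.CenterCrop(32),
    transforms.ToTensor(),
])

cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
stl10 = datasets.STL10(root="data", split="unlabeled", download=True, transform=to_tensor)
# ImageNet cannot be downloaded automatically; its archives must already sit under "data/imagenet".
# imagenet = datasets.ImageNet(root="data/imagenet", split="train", transform=to_tensor)

# Batch size matches the K = 64 reported in the Experiment Setup row.
loader = torch.utils.data.DataLoader(cifar10, batch_size=64, shuffle=True, drop_last=True)
```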
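The Experiment Setup row lists the training hyperparameters reported in the paper. Below is a minimal sketch wiring them into a WGAN-GP-style optimizer and gradient-penalty setup; PyTorch, the toy linear networks, and the WGAN-GP penalty form are illustrative assumptions (the λ = 10 and n_d = 5 values are consistent with that setup, but the row itself does not confirm it).

```python
# Hedged sketch: the hyperparameters from the Experiment Setup row wired into a
# WGAN-GP-style optimizer/penalty setup. PyTorch and the toy linear networks
# below are illustrative assumptions, not the paper's architectures.
import torch
import torch.nn as nn

BATCH_SIZE = 64          # batch size K = 64
GP_LAMBDA = 10.0         # gradient penalty coefficient λ = 10
N_CRITIC = 5             # discriminator updates per generator update, n_d = 5
ADAM_LR = 1e-4           # Adam α = 0.0001
ADAM_BETAS = (0.5, 0.9)  # Adam (β1, β2) = (0.5, 0.9)

# Toy stand-ins on flattened 32x32 RGB images so the snippet runs end to end.
generator = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())
critic = nn.Sequential(nn.Linear(3 * 32 * 32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=ADAM_LR, betas=ADAM_BETAS)
opt_d = torch.optim.Adam(critic.parameters(), lr=ADAM_LR, betas=ADAM_BETAS)

def gradient_penalty(real_flat, fake_flat):
    """WGAN-GP penalty on random interpolates between real and fake batches."""
    eps = torch.rand(real_flat.size(0), 1)
    interp = (eps * real_flat + (1 - eps) * fake_flat).requires_grad_(True)
    grads, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    return GP_LAMBDA * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Smoke test with random tensors standing in for a real data batch; in a full
# training loop the critic would take N_CRITIC such steps per generator step.
real = torch.randn(BATCH_SIZE, 3 * 32 * 32)
fake = generator(torch.randn(BATCH_SIZE, 128)).detach()
print(gradient_penalty(real, fake))
```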