Stabilizing GANs’ Training with Brownian Motion Controller

Authors: Tianjiao Luo, Ziyu Zhu, Jianfei Chen, Jun Zhu

ICML 2023

Reproducibility assessment (each entry gives the variable, the assessed result, and the supporting LLM response):
Research Type: Experimental
"Our experiments show that our GANs-BMC effectively stabilizes GAN training under the StyleGAN2-ADA framework, with a faster rate of convergence, a smaller range of oscillation, and better performance in terms of FID score."

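For orientation, the sketch below shows the generic pattern of perturbing a gradient update with a Brownian increment. It is a minimal illustration only, not the paper's BMC: the function name `brownian_perturbed_step`, the noise gain `sigma`, and the state-proportional noise term are all assumptions, and the paper's actual controller and stability conditions are not reproduced here. PyTorch is assumed.

```python
import torch

def brownian_perturbed_step(param, grad, lr=2e-3, sigma=0.01):
    """One gradient step with an added Brownian-motion perturbation.

    Illustrative only: NOT the paper's BMC update rule. `sigma` is a
    hypothetical noise gain; the state-proportional noise term is an
    assumption for illustration.
    """
    # A Brownian increment over a time step of size `lr` is Gaussian
    # with variance `lr`, hence the sqrt(lr) scaling.
    dW = torch.randn_like(param) * (lr ** 0.5)
    return param - lr * grad + sigma * param * dW
```
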
Researcher Affiliation: Collaboration
"1 Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University; 2 Pazhou Lab (Huangpu), Guangzhou, China."

Pseudocode: No
"The paper does not contain structured pseudocode or algorithm blocks."

Open Source Code: No
"The paper does not provide concrete access information for its methodology (e.g., a repository link or an explicit statement of code release)."

Open Datasets: Yes
"We evaluate our proposed GANs-BMC on the well-established CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom at 256x256 resolution (Yu et al., 2015), LSUN-Cat at 256x256 resolution (Yu et al., 2015), and FFHQ at 1024x1024 resolution (Karras et al., 2019)."

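A minimal sketch, assuming PyTorch and torchvision, of loading the CIFAR-10 training split referenced above; the batch size and normalization statistics here are conventional placeholders, not settings from the paper.

```python
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# CIFAR-10 as cited above; the normalization values are the
# conventional CIFAR-10 channel statistics, not the paper's settings.
transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)
```
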
Dataset Splits: Yes
"We reproduce the identical configuration settings as reported in the StyleGAN2-ADA paper within a period of 7 days on 4 NVIDIA GeForce GTX TITAN X cards. The detailed experimental setups can be found in Appendix C." (The paper implicitly relies on the standard splits for well-known datasets such as CIFAR-10 and FFHQ, which are conventionally divided into training/validation/test sets.)

Hardware Specification: Yes
"We reproduce the identical configuration settings as reported in the StyleGAN2-ADA paper within a period of 7 days on 4 NVIDIA GeForce GTX TITAN X cards."

Software Dependencies: No
"The paper mentions software such as StyleGAN2-ADA and the Adam optimizer, but does not provide version numbers for these or any other software dependencies."

Experiment Setup: Yes
"The detailed experimental setups can be found in Appendix C. Tables 4 and 5 give details such as dataset, batch size, learning rate, optimizer, and GPUs used."
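
To make the fields in Tables 4 and 5 concrete, here is a hedged sketch of a per-dataset setup record driving a standard PyTorch Adam optimizer; every value below is a placeholder assumption, not a hyperparameter reported by the paper.

```python
import torch

# Hypothetical per-dataset setup mirroring the fields of Tables 4/5
# (dataset, batch size, learning rate, optimizer, GPUs). All values
# are placeholders, not the paper's reported settings.
setup = {
    "dataset": "CIFAR-10",
    "batch_size": 64,
    "learning_rate": 2.5e-3,
    "optimizer": "Adam",
    "gpus": 4,
}

# Stand-in parameters; a real run would pass the generator's or
# discriminator's parameters instead.
params = [torch.nn.Parameter(torch.zeros(3, 3))]
optimizer = torch.optim.Adam(params, lr=setup["learning_rate"])
```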