CircleGAN: Generative Adversarial Learning across Spherical Circles

Authors: Woohyeon Shim, Minsu Cho

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In experiments, we validate the effectiveness for both unconditional and conditional generation on standard benchmarks, achieving the state of the art."
Researcher Affiliation | Academia | Woohyeon Shim, POSTECH CiTE, wh.shim@postech.ac.kr; Minsu Cho, POSTECH CSE & GSAI, mscho@postech.ac.kr
Pseudocode | Yes | Algorithm 1: Training CircleGAN
Open Source Code | No | The paper links to repositories for computing Inception Score (IS: https://github.com/openai/improved-gan) and FID (https://github.com/bioinf-jku/TTUR), but provides no access to source code for the CircleGAN method itself. (A sketch of the FID computation follows the table.)
Open Datasets | Yes | "We conduct experiments in both unconditional and conditional settings of GANs to demonstrate the effectiveness of the proposed methods... on standard benchmark datasets, including STL10, CIFAR10, CIFAR100, and Tiny ImageNet."
Dataset Splits | No | The paper mentions using a 'validation set' for evaluating GAN-train scores and t-SNE embeddings and refers to 'standard benchmark datasets', but it does not specify the splits (exact percentages, sample counts, or citations to predefined splits) used for training, validation, and testing. (An illustrative data-loading split follows the table.)
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, processor types, or memory) used to run its experiments.
Software Dependencies | No | The paper mentions a ResNet-based architecture and references external implementations for IS and FID, but it does not list software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions).
Experiment Setup | Yes | "τ adjusts the range of score difference for the sigmoid functions and is set to 5 for s_add and s_real and 10 for s_mult." (A sketch of this scaling follows the table.)
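
The Experiment Setup row quotes a temperature τ applied to score differences before a sigmoid. The snippet below is a minimal sketch of that idea only: the score definitions (s_add, s_mult, s_real) and the exact placement of τ are specific to the CircleGAN paper, so the function name and the division by τ here are illustrative assumptions, not the authors' formulation.

```python
import torch

def sigmoid_of_scaled_difference(scores_a: torch.Tensor,
                                 scores_b: torch.Tensor,
                                 tau: float) -> torch.Tensor:
    """Illustrative sketch: squash a score difference through a sigmoid.

    Dividing by tau widens the range of differences over which the sigmoid
    stays unsaturated; the paper's exact use of tau (5 for s_add and s_real,
    10 for s_mult, per the quote above) may differ in detail.
    """
    return torch.sigmoid((scores_a - scores_b) / tau)

# Hypothetical usage with the values quoted above:
# out_add  = sigmoid_of_scaled_difference(s_add_real,  s_add_fake,  tau=5.0)
# out_mult = sigmoid_of_scaled_difference(s_mult_real, s_mult_fake, tau=10.0)
```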
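
The Open Source Code row points to the TTUR repository for FID. For reference, the closed-form Fréchet distance that FID evaluates between two Gaussians fitted to Inception activations can be sketched as below; extracting the Inception-v3 activations themselves is handled by the referenced repository and is omitted here.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 @ sigma2)^(1/2)).

    mu* are mean vectors and sigma* covariance matrices of Inception
    activations for real and generated images. Only the closed-form
    distance is shown; feature extraction follows the referenced TTUR code.
    """
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        # Regularize a near-singular covariance product with a small ridge.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```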
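
Since the Dataset Splits row notes that no explicit splits are reported, the following is a hedged example of loading one of the cited benchmarks (CIFAR-10) with torchvision and carving out a validation set; the 90/10 ratio, batch size, and seed are assumptions for illustration, not the authors' protocol.

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor()])

# Standard CIFAR-10 train/test partitions as shipped by torchvision.
full_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

# Assumed 90/10 train/validation split; the paper does not specify one.
n_val = len(full_train) // 10
train_set, val_set = random_split(
    full_train,
    [len(full_train) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),  # fixed seed so the split is reproducible
)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64, shuffle=False)
```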