Co-Generation with GANs using AIS based HMC

Authors: Tiantian Fang, Alexander Schwing

NeurIPS 2019

Reproducibility assessment. Each variable below lists the assessed result followed by the LLM response (supporting evidence from the paper).
Research Type: Experimental
LLM Response: "We evaluate the proposed approach on synthetic data and imaging data (CelebA and LSUN), showing compelling results via MSE and MSSIM metrics. The presented approach significantly outperforms classical gradient based methods on a synthetic and on the CelebA and LSUN datasets." From Section 4 (Experiments, Baselines): "In the following, we evaluate the proposed approach on synthetic and imaging data."
Researcher Affiliation: Academia
LLM Response: Tiantian Fang, University of Illinois at Urbana-Champaign (tf6@illinois.edu); Alexander G. Schwing, University of Illinois at Urbana-Champaign (aschwing@illinois.edu)
Pseudocode: Yes
LLM Response: "Algorithm 1: AIS based HMC". A generic sketch of this pattern appears after the table.
Open Source Code: Yes
LLM Response: "The code is available at https://github.com/AilsaF/cogen_by_ais."
Open Datasets: Yes
LLM Response: "We evaluate the proposed approach on synthetic data and imaging data (CelebA and LSUN)."
Dataset Splits: No
LLM Response: No explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits) are provided for the reconstruction task. The paper mentions using well-known datasets (CelebA, LSUN) and progressive GAN training iterations, but not how the data was partitioned for training, validation, and testing of the co-generation method.
Hardware Specification: No
LLM Response: "We thank NVIDIA for providing GPUs used for this work and Cisco for access to the Arcetri cluster." This mentions GPUs and a cluster name but lacks specific models (e.g., NVIDIA A100, Tesla V100) or detailed cluster specifications, so it does not qualify as a specific hardware specification.
Software Dependencies: No
LLM Response: No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are explicitly mentioned in the paper.
Experiment Setup: Yes
LLM Response: "We use a sigmoid schedule for the parameter βt, i.e., we linearly space T−1 temperature values within a range and apply a sigmoid function to these values to obtain βt. We use 0.01 as the leapfrog step size and employ 10 leapfrog updates per HMC loop for the synthetic 2D dataset and 20 leapfrog updates for the real dataset at first. The target acceptance rate is 0.65, as recommended by Neal [59]. A low acceptance rate means the leapfrog step size is too large, in which case the step size is decreased by a factor of 0.98 automatically. In contrast, a high acceptance rate increases the step size by a factor of 1.021."
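
The quoted setup maps onto two small helpers: the sigmoid schedule for βt and the acceptance-rate-driven step-size update. Below is a minimal sketch, assuming a sigmoid input range of [-6, 6] and a rescaling so that βt runs exactly from 0 to 1; the paper says only "within a range", so both choices, and the function names, are illustrative assumptions.

```python
import numpy as np

def sigmoid_beta_schedule(n_temps, lo=-6.0, hi=6.0):
    """Sigmoid schedule for beta_t: linearly space values, squash them through
    a sigmoid, then rescale to hit 0 and 1 exactly (rescaling is an assumption)."""
    x = np.linspace(lo, hi, n_temps)
    s = 1.0 / (1.0 + np.exp(-x))
    return (s - s[0]) / (s[-1] - s[0])

def adapt_step_size(step, accept_rate, target=0.65, down=0.98, up=1.021):
    """Neal's 0.65 target: acceptance below target means the leapfrog step is
    too large, so shrink it by 0.98; otherwise grow it by 1.021."""
    return step * down if accept_rate < target else step * up
```

Starting from the quoted settings, one would begin with step size 0.01 and 10 leapfrog updates per HMC loop (20 for the real datasets) and call adapt_step_size after each loop with the observed acceptance rate.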
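The table only confirms that pseudocode exists ("Algorithm 1: AIS based HMC"). For orientation, here is a minimal, self-contained sketch of the generic AIS-with-HMC pattern such an algorithm instantiates: anneal from a tractable start density to the target along the βt schedule, apply one HMC transition per temperature, and accumulate the importance weight. This is not the authors' Algorithm 1; the names (leapfrog, hmc_step, ais_hmc), the standard-normal momentum, and the toy densities in the usage example are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def leapfrog(z, p, grad_log_p, step, n_steps):
    """Leapfrog integrator: half momentum step, alternating full steps,
    closing half momentum step."""
    p = p + 0.5 * step * grad_log_p(z)
    for i in range(n_steps):
        z = z + step * p
        if i < n_steps - 1:
            p = p + step * grad_log_p(z)
    p = p + 0.5 * step * grad_log_p(z)
    return z, p

def hmc_step(z, log_p, grad_log_p, step, n_steps):
    """One HMC transition targeting log_p; returns (new state, accepted?)."""
    p0 = rng.standard_normal(z.shape)
    z_new, p_new = leapfrog(z, p0, grad_log_p, step, n_steps)
    log_accept = (log_p(z_new) - 0.5 * p_new @ p_new) - (log_p(z) - 0.5 * p0 @ p0)
    if np.log(rng.uniform()) < log_accept:
        return z_new, True
    return z, False

def ais_hmc(z, betas, log_p0, grad0, log_p1, grad1, step=0.01, n_leapfrog=10):
    """AIS over pi_t proportional to p0^(1-beta_t) * p1^(beta_t); returns the
    final sample and the accumulated log importance weight."""
    log_w = 0.0
    log_pt = lambda x, b: (1.0 - b) * log_p0(x) + b * log_p1(x)
    grad_pt = lambda x, b: (1.0 - b) * grad0(x) + b * grad1(x)
    for t in range(1, len(betas)):
        # Weight update: ratio of successive annealed densities at the current state.
        log_w += log_pt(z, betas[t]) - log_pt(z, betas[t - 1])
        # Move the state under the current annealed density.
        z, _ = hmc_step(z, lambda x: log_pt(x, betas[t]),
                        lambda x: grad_pt(x, betas[t]), step, n_leapfrog)
    return z, log_w

# Tiny usage example: anneal from N(0, I) to a sharp Gaussian at (1, 1) in 2-D;
# the sigmoid schedule from the previous sketch could replace np.linspace here.
betas = np.linspace(0.0, 1.0, 200)
z, log_w = ais_hmc(
    rng.standard_normal(2), betas,
    log_p0=lambda x: -0.5 * x @ x,              grad0=lambda x: -x,
    log_p1=lambda x: -50.0 * (x - 1) @ (x - 1), grad1=lambda x: -100.0 * (x - 1),
)
```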