CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis

Authors: Shichong Peng, Seyed Alireza Moazenipourasil, Ke Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show CHIMLE significantly outperforms the prior best IMLE, GAN and diffusion-based methods in terms of image fidelity and mode coverage across four tasks, namely night-to-day, 16× single image super-resolution, image colourization and image decompression. Quantitatively, our method improves Fréchet Inception Distance (FID) by 36.9% on average compared to the prior best IMLE-based method, and by 27.5% on average compared to the best non-IMLE-based general-purpose methods. (A sketch of this relative-improvement arithmetic follows the table.)
Researcher Affiliation | Collaboration | Shichong Peng¹, Alireza Moazeni¹, Ke Li¹,² (¹APEX Lab, School of Computing Science, Simon Fraser University; ²Google); {shichong_peng,seyed_alireza_moazenipourasil,keli}@sfu.ca
Pseudocode | Yes | Algorithm 1: Conditional Hierarchical IMLE Algorithm for a Single Data Point. (An illustrative sketch of the underlying conditional IMLE step follows the table.)
Open Source Code | Yes | More results and code are available on the project website at https://niopeng.github.io/CHIMLE/.
Open Datasets | Yes | Details for datasets are included in the appendix.
Dataset Splits | No | No. While the paper states 'Details for datasets are included in the appendix' and the checklist claims 'data splits' are specified, the main body of the paper does not explicitly detail the training/validation/test dataset splits (e.g., percentages or counts).
Hardware Specification | Yes | We trained all models for 150 epochs with a mini-batch size of 1 using the Adam optimizer [44] on an NVIDIA V100 GPU.
Software Dependencies | No | No. The paper mentions using the 'Adam optimizer [44]' and various metrics and baselines with citations, but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or library versions).
Experiment Setup | Yes | We trained all models for 150 epochs with a mini-batch size of 1 using the Adam optimizer [44] on an NVIDIA V100 GPU. (A minimal training-loop sketch with these stated settings follows the table.)
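
The Research Type row reports FID improvements of 36.9% and 27.5% on average. A minimal sketch of the arithmetic is below, assuming 'improvement' means the relative FID reduction averaged over the four tasks; the per-task FID values are placeholders, not numbers from the paper.

```python
# Placeholder FID scores per task (illustrative only, NOT values from the paper).
# Lower FID is better, so improvement is measured as relative reduction.
fid_baseline = {"night-to-day": 100.0, "16x-super-resolution": 80.0,
                "colourization": 60.0, "decompression": 90.0}
fid_chimle = {"night-to-day": 60.0, "16x-super-resolution": 52.0,
              "colourization": 38.0, "decompression": 55.0}

def relative_improvement(baseline: float, ours: float) -> float:
    """Percentage reduction in FID relative to a baseline method."""
    return 100.0 * (baseline - ours) / baseline

per_task = [relative_improvement(fid_baseline[t], fid_chimle[t]) for t in fid_baseline]
print(f"average FID improvement: {sum(per_task) / len(per_task):.1f}%")
```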
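
The Pseudocode row refers to Algorithm 1, which is not reproduced in this report. As an illustration of the conditional IMLE idea it builds on (draw several latent codes for the same input, keep the generated sample closest to the ground truth, and train only on that sample), here is a hedged PyTorch-style sketch. The function name, the number of samples, and the L2 distance are illustrative assumptions; the paper's Algorithm 1 additionally samples the latent code hierarchically, keeping only the most promising partial codes at each level.

```python
import torch

def imle_step_single_datapoint(generator, x, y, num_samples=16, code_dim=128):
    """One conditional-IMLE-style update for a single (input x, target y) pair.

    Illustrative sketch only; CHIMLE's Algorithm 1 refines this by drawing the
    latent code in chunks and keeping only the most promising partial samples.
    """
    # Draw several latent codes for the same conditioning input.
    codes = torch.randn(num_samples, code_dim)

    # Generate one candidate output per code; no gradients needed for selection.
    with torch.no_grad():
        candidates = torch.stack([generator(x, z) for z in codes])
        dists = ((candidates - y.unsqueeze(0)) ** 2).flatten(1).mean(dim=1)
        best = torch.argmin(dists)

    # Recompute the nearest candidate with gradients and minimise its distance.
    loss = ((generator(x, codes[best]) - y) ** 2).mean()
    loss.backward()
    return loss
```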
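
The Experiment Setup row states 150 epochs, a mini-batch size of 1, and the Adam optimizer. A minimal outer-loop sketch under those stated settings is below; it reuses imle_step_single_datapoint from the previous sketch, and the toy generator, toy dataset, and learning rate are assumptions, since the quoted setup does not give them.

```python
import torch
from torch import nn

# Toy stand-ins so the loop runs end to end; the real model and data differ.
net = nn.Sequential(nn.Linear(8 + 4, 16), nn.ReLU(), nn.Linear(16, 8))
def toy_generator(x, z):
    return net(torch.cat([x, z]))

dataset = [(torch.randn(8), torch.randn(8)) for _ in range(4)]

# 150 epochs, mini-batch size 1, Adam: as stated in the quoted setup.
# The learning rate is an assumption; the quote does not specify one.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

for epoch in range(150):
    for x, y in dataset:  # mini-batch size 1: one example per update
        optimizer.zero_grad()
        imle_step_single_datapoint(toy_generator, x, y, num_samples=16, code_dim=4)
        optimizer.step()
```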