Image Generation using Continuous Filter Atoms

Authors: Ze Wang, Seunghyun Hwang, Zichen Miao, Qiang Qiu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We support the proposed framework of image generation with continuous filter atoms using various experiments, including image-to-image translation and image generation conditioned on continuous labels.
Researcher Affiliation | Academia | Ze Wang, Seunghyun Hwang, Zichen Miao, Qiang Qiu, Purdue University. {zewang, hwang229, miaoz, qqiu}@purdue.edu
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code will be made publicly available.
Open Datasets | Yes | We present results on various datasets including edges → handbags, edges → shoes, maps → satellite, night → days, and labels → facades. Paired samples. In this experiment, we plug in our continuous filter atoms to the Pix2Pix [17] model and conduct continuous conditional image-to-image translation tasks on five different synthetic datasets, RC-49 [8], RA-20, RCA-20, RT-20, RL-20, where objects in each dataset are rendered to rotate between 0° and 360° with a 0.1° interval. Unpaired samples. For translation tasks on unpaired datasets, we present results on the UTKFace [56], Steering Angle, cells-200, and Waymo [41] datasets.
Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits with specific percentages, counts, or references to predefined split methodologies.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper does not provide a reproducible description of ancillary software, as no specific version numbers for key software components are mentioned.
Experiment Setup | No | Implementation details are in Appendix Section B.
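
The framework assessed above builds generator convolutions from continuous filter atoms. As a rough illustration of that idea only, the sketch below decomposes a convolution's kernels into fixed per-filter mixing coefficients over a small set of spatial atoms, with the atoms produced by a small MLP from a continuous scalar label (e.g., a rotation angle). This is not the authors' implementation (their details are in Appendix B of the paper); the module name `ContinuousAtomConv2d`, the helper `atom_net`, and all sizes and the conditioning scheme are assumptions made for illustration.

```python
# Hedged sketch: a conv layer whose kernels are linear combinations of a few
# "filter atoms" that vary continuously with a scalar condition. Names and
# hyperparameters are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContinuousAtomConv2d(nn.Module):
    """Conv2d whose kernels are mixtures of condition-dependent atoms.

    Each (output, input) channel pair keeps fixed mixing coefficients, while
    the m spatial atoms (k x k each) come from a small MLP applied to a
    continuous scalar label, so the layer varies smoothly with that label.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, num_atoms=6, hidden=64):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.k, self.m = kernel_size, num_atoms
        # Learned, condition-independent coefficients over atoms.
        self.coeff = nn.Parameter(torch.randn(out_ch * in_ch, num_atoms) * 0.1)
        # Hypothetical MLP mapping a scalar condition to m flattened k x k atoms.
        self.atom_net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_atoms * kernel_size * kernel_size),
        )

    def forward(self, x, condition):
        # condition: tensor of shape (1,), e.g. a rotation angle normalized to [0, 1).
        atoms = self.atom_net(condition.view(1, 1))            # (1, m*k*k)
        atoms = atoms.view(self.m, self.k * self.k)            # (m, k*k)
        filters = self.coeff @ atoms                           # (out*in, k*k)
        filters = filters.view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, filters, padding=self.k // 2)


if __name__ == "__main__":
    layer = ContinuousAtomConv2d(in_ch=3, out_ch=16)
    img = torch.randn(2, 3, 64, 64)
    angle = torch.tensor([90.0 / 360.0])                       # normalized rotation label
    print(layer(img, angle).shape)                             # torch.Size([2, 16, 64, 64])
```

The point of the decomposition is that only the small atom set depends on the continuous label, while the bulk of the parameters (the mixing coefficients) stays fixed, which is one plausible way to condition a generator on a continuous variable such as rotation angle.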