Creative Sketch Generation

Authors: Songwei Ge, Vedanuj Goswami, Larry Zitnick, Devi Parikh

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches." |
| Researcher Affiliation | Collaboration | Songwei Ge (University of Maryland, College Park; songweig@umd.edu); Vedanuj Goswami and C. Lawrence Zitnick (Facebook AI Research; {vedanuj,zitnick}@fb.com); Devi Parikh (Facebook AI Research and Georgia Institute of Technology; parikh@gatech.edu) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | "Our datasets, code, and a web demo are publicly available" (songweige.github.io/projects/creative_sketech_generation/home.html) |
| Open Datasets | Yes | "Our datasets, code, and a web demo are publicly available" (songweige.github.io/projects/creative_sketech_generation/home.html) ... "To this end, we trained an Inception model on the Quick Draw 3.8M dataset (Xu et al., 2020)." |
| Dataset Splits | Yes | "The dataset contains 345 classes and each class contains 9,000 training samples, 1,000 validation samples, and 1,000 test samples." |
| Hardware Specification | Yes | "Our training time of each creature and bird part generator is approximately 4 and 2 days on a single NVIDIA Quadro GV100 Volta GPU." |
| Software Dependencies | No | The paper mentions using the Adam optimizer and the StyleGAN2 architecture but does not specify version numbers for general software dependencies such as Python, PyTorch/TensorFlow, or CUDA. |
| Experiment Setup | Yes | "We picked a learning rate of 10^-4 and a batch size of 40 for both the discriminator and generator. We use the Adam optimizer ... with β1 = 0, β2 = 0.99, ϵ = 10^-8. ... We train the creature part generators for 60,000 steps and bird part generators for 30,000 steps." |
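As a quick sanity check on the reported splits, the per-class counts quoted above can be multiplied out; the total is consistent with the "3.8M" in the dataset's name. The constant names below are illustrative, not from the paper.

```python
# Split sizes quoted in the paper's Dataset Splits evidence.
NUM_CLASSES = 345
TRAIN_PER_CLASS = 9_000
VAL_PER_CLASS = 1_000
TEST_PER_CLASS = 1_000

total = NUM_CLASSES * (TRAIN_PER_CLASS + VAL_PER_CLASS + TEST_PER_CLASS)
print(total)  # 3795000, i.e. ~3.8M samples
```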
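The Adam hyperparameters in the Experiment Setup row (lr = 10^-4, β1 = 0, β2 = 0.99, ϵ = 10^-8) have a notable consequence: with β1 = 0 the first-moment estimate reduces to the raw gradient, so the update behaves like RMSProp-style adaptive SGD. The single-parameter update below is a minimal sketch of the Adam rule under those settings, not the paper's actual training code.

```python
import math

def adam_step(param, grad, m, v, t,
              lr=1e-4, beta1=0.0, beta2=0.99, eps=1e-8):
    """One scalar Adam update using the paper's reported hyperparameters.

    t is the 1-based step count; m and v are the running first- and
    second-moment estimates (both start at 0.0).
    """
    m = beta1 * m + (1 - beta1) * grad        # equals grad when beta1 == 0
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# At the first step the bias-corrected update is approximately
# lr * sign(grad), since v_hat == grad**2 when t == 1.
p, m, v = adam_step(param=0.0, grad=2.0, m=0.0, v=0.0, t=1)
print(p)  # ≈ -1e-4
```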