SketchKnitter: Vectorized Sketch Generation with Diffusion Models

Authors: Qiang Wang, Haoge Deng, Yonggang Qi, Da Li, Yi-Zhe Song

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate our model in two modes, i.e., unconditional and conditional generation, to verify the quality of the generated data and the ability to mend inferior sketches of our model. Please refer to Appendix A.6 for implementation details. [...] Table 1: Quantitative comparison results. [...] Qualitative results. Figure 2(a) shows some examples of reverse-time diffusion process, i.e., from random noise till reach the data sample, the generated sketch at each step exhibits different (reduced) level of distortion. More examples of unconditional generation are demonstrated in Figure 2(b)."
Researcher Affiliation | Academia | Qiang Wang1, Haoge Deng1, Yonggang Qi1, Da Li2, Yi-Zhe Song2. 1Beijing University of Posts and Telecommunications, CN; 2SketchX, CVSSP, University of Surrey, UK
Pseudocode | No | The paper describes the diffusion process and generative process using mathematical equations and textual explanations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper states "Code to be found at GitHub page" with a footnote pointing to https://www.ravelry.com/ (Note: the cited URL links to a knitting/crochet community website, not a code repository.)
Open Datasets | Yes | "We evaluate our proposed method on QuickDraw Ha & Eck (2018), which contains over 50M sketches in vector format across 345 common categories."
Dataset Splits | No | The paper states "The original data split is adopted, i.e., 70,000 training and 2,500 testing sketches for each class." It mentions training and testing splits but does not provide specific details for a validation split.
Hardware Specification | Yes | "A single Nvidia 3090 GPU is used for model training."
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify version numbers for any software, libraries (e.g., PyTorch, TensorFlow), or programming languages used for implementation.
Experiment Setup | Yes | The batch size is set to 512. The point number is selected as 96. The default setting of skipping stride is m = 50, and the recognizability threshold is ζ = 0.2. The noise schedule uses β1 = 10^-4 and βT = 0.02; the mean of the forward step in Eq. 1 is then scaled by sqrt(1 − βt) and its variance is βt. The Adam optimizer (β1 = 0.9 and β2 = 0.98) is used for optimization.
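The experiment-setup row above can be sketched as a standard DDPM-style linear noise schedule. This is a minimal illustration, not the authors' code: β1 = 1e-4, βT = 0.02, and the 96-point sketch length come from the paper, while the number of diffusion steps T = 1000 and the helper `q_sample` are assumptions (T is not stated in this excerpt).

```python
import numpy as np

# Linear noise schedule: beta_1 = 1e-4, beta_T = 0.02 (from the paper).
# T = 1000 is an assumed DDPM default; the excerpt does not state it.
T = 1000
betas = np.linspace(1e-4, 0.02, T)

# Forward step q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I):
# the mean is scaled by sqrt(1 - beta_t) and the variance is beta_t.
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative products for the closed form

def q_sample(x0, t, rng):
    """Hypothetical helper: sample x_t directly from a clean sketch x0
    using the closed form q(x_t | x_0)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# Example: noise a sketch of 96 (x, y) points, as in the paper's setup.
rng = np.random.default_rng(0)
x0 = np.zeros((96, 2))
xt = q_sample(x0, T // 2, rng)
```

The cumulative product `alpha_bars` decays monotonically toward zero, so late-step samples approach pure Gaussian noise, matching the reverse-time process described in the paper's qualitative results.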