DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation

Authors: Alexandre Carlier, Martin Danelljan, Alexandre Alahi, Radu Timofte

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool by performing interpolations and other latent space operations. Our code is available at https://github.com/alexandre01/deepsvg. We perform comprehensive experiments, demonstrating successful interpolation and manipulation of complex icons in vector-graphics format. Examples are presented in Fig. 1. We introduce a large-scale dataset of SVG icons along with an open-source library for SVG manipulation, in order to facilitate further research in this area. To the best of our knowledge, this is the first work to explore generative models of complex vector graphics, and to show successful interpolation and manipulation results for this task. Table 2: Ablation study of our DeepSVG model showing results of the human study (1st rank % and average rank), and quantitative measurements (RE and IS) on train/test set.
Researcher Affiliation | Academia | Alexandre Carlier (EPFL, ETH Zurich), Martin Danelljan (ETH Zurich), Alexandre Alahi (EPFL), Radu Timofte (ETH Zurich)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/alexandre01/deepsvg.
Open Datasets | Yes | Thus, we first introduce a new dataset, called SVG-Icons8. It is composed of SVG icons obtained from the https://icons8.com website. Available at https://github.com/alexandre01/deepsvg.
Dataset Splits | No | The paper mentions a 'train/test set' in Table 2, but does not provide specific details on validation dataset splits, percentages, or methodology.
Hardware Specification | Yes | We train our networks for 100 epochs with a total batch-size of 120 on two 1080Ti GPUs, which takes about one day.
Software Dependencies | No | The paper mentions the AdamW optimizer and other training parameters but does not specify software dependencies (e.g., Python, PyTorch/TensorFlow versions) with version numbers.
Experiment Setup | Yes | We use the AdamW [12] optimizer with initial learning rate 10^-4, reduced by a factor of 0.9 every 5 epochs and a linear warmup period of 500 initial steps. We use a dropout rate of 0.1 in all transformer layers and gradient clipping of 1.0. We train our networks for 100 epochs with a total batch-size of 120 on two 1080Ti GPUs, which takes about one day.
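
To make the reported optimization settings concrete, the following PyTorch-style sketch wires up AdamW with learning rate 10^-4, a 500-step linear warmup followed by a 0.9 decay every 5 epochs, dropout 0.1, gradient clipping at 1.0, batch size 120, and 100 epochs. The model, random data, and MSE objective are placeholders for illustration only; they are not the authors' DeepSVG network, dataset, or loss.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model: a single transformer encoder layer using the reported
# dropout of 0.1; the real DeepSVG network is a hierarchical transformer.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, dropout=0.1, batch_first=True),
    num_layers=1)

# Placeholder data: random sequences standing in for encoded SVG paths.
loader = DataLoader(TensorDataset(torch.randn(1200, 8, 256)),
                    batch_size=120, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
steps_per_epoch = len(loader)

def lr_lambda(step):
    # Linear warmup over the first 500 steps, then multiply by 0.9 every 5 epochs.
    warmup = min(1.0, (step + 1) / 500)
    decay = 0.9 ** ((step // steps_per_epoch) // 5)
    return warmup * decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
criterion = nn.MSELoss()  # placeholder objective, not the paper's loss

for epoch in range(100):
    for (x,) in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), x)  # placeholder reconstruction target
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping of 1.0
        optimizer.step()
        scheduler.step()  # stepped per batch so the 500-step warmup applies
```

The single LambdaLR schedule combines the per-step warmup and the per-5-epoch decay so the two factors do not overwrite each other; the paper's actual training splits the batch of 120 across two 1080Ti GPUs.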