Deep Style Transfer for Line Drawings
Authors: Xueting Liu, Wenliang Wu, Huisi Wu, Zhenkun Wen
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results and statistics show that our method significantly outperforms the existing methods both visually and quantitatively. ... Extensive experiments are performed on line drawings of different content and styles. Convincing results are obtained. ... We compare our method with the state-of-the-art style transfer, image generation, and style-content disentanglement methods. ... We further compare our method with the existing methods via two quantitative experiments. |
| Researcher Affiliation | Academia | Xueting Liu, Wenliang Wu, Huisi Wu, Zhenkun Wen. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, China. hswu@szu.edu.cn |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | No | To train our networks, we collect 4,500 line drawings of various content and styles from the Internet. We randomly select 4,300 of them as the training dataset, and the remaining 200 are used as the test set. The paper does not provide access information (link, DOI, or specific citation) for this collected dataset. |
| Dataset Splits | No | The paper specifies 4,300 images for training and 200 for testing, but does not state a validation split or its size; it names only a 'training dataset' and a 'test set' (a hypothetical reconstruction of this split is sketched after the table). |
| Hardware Specification | Yes | We tested the traditional style transfer method Frigo on a PC with Intel Core i7-9700 3.0GHz CPU and 16GB memory, and all the other methods on an RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions Matlab, the U-net and VGG architectures, and the Adam optimizer, but provides no version numbers for any of these software dependencies. |
| Experiment Setup | Yes | The learning rate is initially set to 1e-4 and gradually decreases to 2e-6. The optimization converges in about 80 epochs (see the schedule sketch after the table). |
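
The 4,300/200 split noted above is straightforward to reconstruct in code. Below is a minimal sketch of such a random split; the directory layout, file format, random seed, and function name are all assumptions, since the paper reports only the counts and no validation set.

```python
import random
from pathlib import Path

def split_line_drawings(root: str, n_train: int = 4300, n_test: int = 200, seed: int = 0):
    """Hypothetical reconstruction of the paper's train/test split."""
    files = sorted(Path(root).glob("*.png"))  # assumed file layout and format
    assert len(files) == n_train + n_test, "expected the 4,500 collected line drawings"
    rng = random.Random(seed)  # seed is an assumption; the paper specifies none
    rng.shuffle(files)
    # No validation split is reported, so only train and test sets are returned.
    return files[:n_train], files[n_train:]
```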
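Likewise, the reported optimization settings (Adam, learning rate decaying from 1e-4 to 2e-6, convergence in about 80 epochs) pin down a schedule only at its endpoints. A hedged PyTorch sketch follows; the exponential decay shape and the placeholder model are assumptions, as the paper states neither the decay curve nor the framework used.

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the paper's actual network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Per-epoch multiplier taking the learning rate from 1e-4 down to 2e-6 by
# epoch 80; the exponential shape is an assumption.
gamma = (2e-6 / 1e-4) ** (1 / 80)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for epoch in range(80):
    # ... one pass over the 4,300 training line drawings would go here ...
    scheduler.step()
```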