Scribble-to-Painting Transformation with Multi-Task Generative Adversarial Networks
Authors: Jinning Li, Yexiang Xue
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that DSP-Net outperforms state-of-the-art models both visually and quantitatively. |
| Researcher Affiliation | Academia | Jinning Li (Shanghai Jiao Tong University) and Yexiang Xue (Purdue University); lijinning@sjtu.edu.cn, yexiang@purdue.edu |
| Pseudocode | No | The paper describes the network architecture and mathematical objectives, but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and dataset can be found at https://github.com/jinningli/DSP-Net. |
| Open Datasets | Yes | In this paper, we build a triple dataset including scribbles, paintings, and semantic images based on the COCO dataset [Lin et al., 2014]. |
| Dataset Splits | Yes | We split the dataset into 4500 images for training and 500 images for testing. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU models, CPU types, or cloud resources). |
| Software Dependencies | No | The paper mentions various models and algorithms used (e.g., VGG19 network, Pix2pix, Cycle GAN), but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train the GAN-based models for 200 epochs and neural style for 1000 iterations. We keep the other hyper-parameters and basic settings at the recommended values given in the original code (a minimal sketch of this setup follows the table). |
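
The split sizes and training budget are the only setup details the paper pins down, so a reader attempting reproduction must fill in the rest. Below is a minimal sketch of that reported setup. Only the 4500/500 split, the 200-epoch GAN budget, and the 1000-iteration neural-style budget come from the paper; the random shuffle, the fixed seed, and every identifier here are assumptions, not part of the authors' released code.

```python
# Minimal sketch of the reported experimental setup, assuming a flat list
# of image identifiers. Split sizes and training budgets are taken from
# the paper; everything else (shuffling, seed, names) is hypothetical.
import random

N_TRAIN = 4500              # reported training-set size
N_TEST = 500                # reported test-set size
GAN_EPOCHS = 200            # reported epochs for the GAN-based models
NEURAL_STYLE_ITERS = 1000   # reported iterations for the neural-style baseline

def split_dataset(image_ids, n_train=N_TRAIN, n_test=N_TEST, seed=0):
    """Shuffle and partition the dataset into train/test subsets.

    The paper does not say whether the split is random or how it is
    seeded, so the shuffle and the fixed seed are assumptions.
    """
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    if len(ids) < n_train + n_test:
        raise ValueError("dataset smaller than requested split")
    return ids[:n_train], ids[n_train:n_train + n_test]

if __name__ == "__main__":
    train_ids, test_ids = split_dataset(range(5000))
    print(len(train_ids), len(test_ids))  # 4500 500
```

For the remaining hyper-parameters, the paper defers to the defaults recommended in each baseline's original code, so those values would have to be recovered from the respective repositories rather than from the paper itself.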