A Learned Representation For Artistic Style
Authors: Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. |
| Researcher Affiliation | Industry | Vincent Dumoulin & Jonathon Shlens & Manjunath Kudlur Google Brain, Mountain View, CA vi.dumoulin@gmail.com, shlens@google.com, keveman@google.com |
| Pseudocode | No | The paper does not contain a structured pseudocode or algorithm block. Figure 3 gives the conditional instance normalization equations in mathematical form, not pseudocode (see the sketch below the table). |
| Open Source Code | Yes | A complete implementation of the model in TensorFlow (Abadi et al., 2016) as well as a pretrained model are available for download at https://github.com/tensorflow/magenta |
| Open Datasets | Yes | Our training procedure follows Johnson et al. (2016). Briefly, we employ the ImageNet dataset (Deng et al., 2009) as a corpus of training content images. |
| Dataset Splits | No | The paper mentions 'evaluation images' but does not specify explicit training, validation, and test dataset splits with percentages or counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions TensorFlow but does not provide a specific version number for the software used in its implementation. |
| Experiment Setup | Yes | Unless noted otherwise, all style transfer networks were trained using the hyperparameters outlined in the Appendix's Table 1. ... Optimizer Adam (Kingma & Ba, 2014) (α = 0.001, β1 = 0.9, β2 = 0.999), Parameter updates 40,000, Batch size 16, Weight initialization Isotropic Gaussian (µ = 0, σ = 0.01). A configuration sketch of these values appears below the table. |
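
The equations in the paper's Figure 3 define conditional instance normalization: each style s selects its own scale γ_s and shift β_s, applied to activations normalized over their spatial dimensions, z = γ_s (x − µ)/σ + β_s. Below is a minimal NumPy sketch of that operation; the array layout and names (`gamma`, `beta`, `style_index`) are our own assumptions, not the authors' TensorFlow implementation.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, style_index, eps=1e-5):
    """Conditional instance normalization (cf. Figure 3 of the paper).

    x:           activations of shape (batch, height, width, channels)
    gamma, beta: per-style parameters of shape (num_styles, channels)
    style_index: which style's (gamma, beta) row to apply (hypothetical API)
    """
    # Normalize each feature map over its spatial dimensions only.
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)
    # Apply the style-specific affine parameters; all other network
    # weights are shared across styles.
    return gamma[style_index] * x_norm + beta[style_index]
```

Because all convolutional weights are shared and only the (γ_s, β_s) rows differ per style, the style combinations the abstract describes amount to interpolating those rows between paintings.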
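For readers who want the reported setup in executable form, here is a hedged rendering of the Appendix Table 1 hyperparameters as a TensorFlow 2 / Keras configuration. The paper predates this API, so the objects below are our mapping of the reported values, not the authors' code.

```python
import tensorflow as tf

# Optimizer hyperparameters reported in the paper's Appendix Table 1.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,  # alpha
    beta_1=0.9,
    beta_2=0.999,
)
batch_size = 16
num_parameter_updates = 40_000

# Weight initialization: isotropic Gaussian with mu = 0, sigma = 0.01,
# expressed here as a Keras initializer (our rendering, not the authors').
initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01)
```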