Unsupervised Learning of Artistic Styles with Archetypal Style Analysis

Authors: Daan Wynen, Cordelia Schmid, Julien Mairal

NeurIPS 2018

Reproducibility assessment (Variable: Result, with the LLM response below each item)

Research Type: Experimental
  "In this paper, we introduce an unsupervised learning approach to automatically discover, summarize, and manipulate artistic styles from large collections of paintings. ... In this paper, we learn archetypal representations of style from image collections. ... In this section, we present our experimental results on two datasets described below."

Researcher Affiliation: Academia
  "Daan Wynen, Cordelia Schmid, Julien Mairal — Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. firstname.lastname@inria.fr"

Pseudocode: No
  The paper does not contain structured pseudocode or algorithm blocks.

Open Source Code: Yes
  "Our implementation will be made publicly available. Further examples can be found at http://pascal.inrialpes.fr/data2/archetypal_style."

Open Datasets: Yes
  "GanGogh is a collection of 95,997 artworks downloaded from WikiArt. The images cover most of the freely available WikiArt catalog..." (https://github.com/rkjones4/GANGogh, https://www.wikiart.org)

Dataset Splits: No
  The paper describes the datasets used for learning archetypes and for the experiments, but gives no explicit train/validation/test splits.

Hardware Specification: No
  The paper does not report the hardware used for its experiments (no GPU/CPU models, processor speeds, or memory amounts).

Software Dependencies: No
  "Our implementation is in PyTorch [17] and relies in part on the universal style transfer implementation. Archetypal analysis is performed using the SPAMS software package [2, 14], and the singular value decomposition is performed by scikit-learn [18]." The software packages are named, but no version numbers are given.

Experiment Setup: No
  The paper mentions some parameters, such as γ and δ, but does not provide experimental setup details such as learning rates, batch sizes, number of epochs, or optimizer settings.
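For context on the technique named in the paper's title: archetypal analysis represents each data point as a convex combination of a small set of learned archetypes, which are themselves constrained to be convex combinations of the data points. The paper uses the SPAMS package for this; the sketch below is only a minimal illustrative NumPy version (all function names are ours, not the SPAMS API), using alternating simplex-constrained least squares solved by projected gradient descent.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    cond = u - css / ind > 0
    rho = ind[cond][-1]
    theta = css[cond][-1] / rho
    return np.maximum(v - theta, 0.0)

def simplex_nls(Y, D, n_iter=200):
    """min ||Y - C D||_F^2 over C, each row of C on the simplex,
    via projected gradient descent with a 1/L step size."""
    C = np.full((Y.shape[0], D.shape[0]), 1.0 / D.shape[0])
    L = np.linalg.norm(D @ D.T, 2) + 1e-12  # Lipschitz const. of the gradient
    for _ in range(n_iter):
        G = (C @ D - Y) @ D.T
        C = np.apply_along_axis(project_simplex, 1, C - G / L)
    return C

def archetypal_analysis(X, p, n_outer=30, seed=0):
    """Alternating minimization for X ≈ A Z with Z = B X,
    where the rows of A and B both lie on the simplex."""
    rng = np.random.default_rng(seed)
    # Sparse Dirichlet init places each archetype near a random data point.
    B = rng.dirichlet(np.full(X.shape[0], 0.05), size=p)
    Z = B @ X
    for _ in range(n_outer):
        A = simplex_nls(X, Z)          # codes: one simplex row per data point
        Zt = np.linalg.pinv(A) @ X     # unconstrained archetype update
        B = simplex_nls(Zt, X)         # re-express archetypes in the data
        Z = B @ X
    return A, B, Z
```

With style descriptors as the rows of X, the rows of A give the interpretable decomposition of each painting over the learned style archetypes Z, which is the representation the paper manipulates for stylization.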