Inferring Painting Style with Multi-Task Dictionary Learning
Authors: Gaowen Liu, Yan Yan, Elisa Ricci, Yi Yang, Yahong Han, Stefan Winkler, Nicu Sebe
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experimental evaluation shows that our approach significantly outperforms state-of-the-art methods. In this section we introduce the DART dataset and evaluate the effectiveness of our method. |
| Researcher Affiliation | Academia | ¹DISI, University of Trento, Italy; ²ADSC, UIUC, Singapore; ³Fondazione Bruno Kessler, Italy; ⁴QCIS, University of Technology Sydney, Australia; ⁵Tianjin University, China |
| Pseudocode | Yes | Algorithm 1: Learning artist-specific and style-specific dictionaries. |
| Open Source Code | No | "The source code for the optimization will be made available online." Only a promise of future release is made; no repository URL is provided. |
| Open Datasets | No | We collected the DART dataset which contains paintings from different art movements and different artists. The paper introduces the dataset but provides no access information (e.g., URL, DOI, or repository) for public availability. |
| Dataset Splits | Yes | In our experiments we randomly split the dataset into two parts, half for training and half for testing. We set the regularization parameters, the subspace dimensionality s and the dictionary size l with cross-validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify the versions of any ancillary software, libraries, or programming languages used for the implementation. |
| Experiment Setup | Yes | We set the regularization parameters, the subspace dimensionality s and the dictionary size l with cross-validation. Fig. 4(middle) shows that the proposed method achieves the best results when the dictionary size is 100 and the subspace dimensionality is 25. |
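The evaluation protocol reported above (a random half/half train/test split, with dictionary size l and subspace dimensionality s chosen by cross-validation) can be sketched as follows. This is a minimal illustration, not the paper's method: the features are synthetic stand-ins for the unreleased DART dataset, and the cross-validation scorer uses a random-projection nearest-centroid classifier in place of the multi-task dictionary model. The grid values are illustrative; the paper reports l=100, s=25 as best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in features: 200 "paintings", 50-dim descriptors, 4 classes.
# The real DART dataset is not publicly released.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 4, size=200)

# Random half/half train/test split, as described in the paper.
idx = rng.permutation(len(X))
half = len(X) // 2
train_idx, test_idx = idx[:half], idx[half:]

def cv_score(Xtr, ytr, dict_size, subspace_dim, folds=5):
    """Stand-in scorer: projects onto a random subspace of the given
    dimensionality and classifies by nearest class centroid. A real run
    would fit the paper's multi-task dictionary model (with dictionary
    size `dict_size`) in each fold and return held-out accuracy."""
    proj = rng.normal(size=(Xtr.shape[1], subspace_dim))
    scores = []
    for fold in np.array_split(np.arange(len(Xtr)), folds):
        mask = np.ones(len(Xtr), dtype=bool)
        mask[fold] = False
        Z_tr, Z_va = Xtr[mask] @ proj, Xtr[fold] @ proj
        centroids = np.stack(
            [Z_tr[ytr[mask] == c].mean(axis=0) for c in np.unique(ytr[mask])]
        )
        pred = np.argmin(((Z_va[:, None] - centroids) ** 2).sum(-1), axis=1)
        scores.append((pred == ytr[fold]).mean())
    return float(np.mean(scores))

# Cross-validated grid search over dictionary size l and subspace
# dimensionality s (grid values are assumptions for this sketch).
grid = [(l, s) for l in (50, 100, 150) for s in (15, 25, 35)]
best_l, best_s = max(grid, key=lambda p: cv_score(X[train_idx], y[train_idx], *p))
```

The split uses every sample exactly once, and model selection touches only the training half, mirroring the protocol the paper describes.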