Universal Style Transfer via Feature Transforms

Authors: Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We evaluate the proposed algorithm with existing approaches extensively on both style transfer and texture synthesis tasks and present in-depth analysis. (Section 4, Experimental Results)
Researcher Affiliation | Collaboration | Yijun Li, UC Merced (yli62@ucmerced.edu); Chen Fang, Adobe Research (cfang@adobe.com); Jimei Yang, Adobe Research (jimyang@adobe.com); Zhaowen Wang, Adobe Research (zhawang@adobe.com); Xin Lu, Adobe Research (xinl@adobe.com); Ming-Hsuan Yang, UC Merced & NVIDIA Research (mhyang@ucmerced.edu)
Pseudocode | No | The paper includes pipeline diagrams but no explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The models and code are available at https://github.com/Yijunmaverick/UniversalStyleTransfer.
Open Datasets | Yes | It is trained on the Microsoft COCO dataset [22]; [22] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
Dataset Splits | No | The paper states training on the Microsoft COCO dataset but does not provide specific training/validation/test splits, percentages, or sample counts.
Hardware Specification | Yes | Quantitative comparisons between different stylization methods in terms of the covariance matrix difference (L_s), user preference, and run-time, tested on images of size 256 × 256 and a 12GB TITAN X.
Software Dependencies | No | The paper mentions models and datasets but does not specify any software libraries, frameworks, or their version numbers used for implementation (e.g., PyTorch, TensorFlow, or specific Python versions).
Experiment Setup | Yes | The pixel reconstruction loss [5] and feature loss [16, 5] are employed for reconstructing an input image... In addition, λ is the weight to balance the two losses. For the multi-level stylization approach... the weight λ to balance the two losses in (1) is set as 1. After the WCT, we may blend f̂_cs with the content feature f_c as in (4) before feeding it to the decoder: f̂_cs = α·f̂_cs + (1 − α)·f_c (4), where α serves as the style weight for users to control the transfer effect. For our results, we set the style weight α = 0.6.
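As a minimal sketch of the setup quoted above, assuming a PyTorch-style implementation (the authors' released code is in Torch; `encoder` and `decoder` below are hypothetical stand-ins for the fixed VGG encoder and the trained decoder, not the paper's actual functions):

```python
import torch.nn.functional as F

def decoder_loss(decoder, encoder, image, lam=1.0):
    """Decoder training objective of Eq. (1): pixel reconstruction loss
    plus feature loss, balanced by the weight lambda (set to 1 in the paper)."""
    feat = encoder(image)                  # VGG features of the input image
    recon = decoder(feat)                  # reconstructed image
    pixel_loss = F.mse_loss(recon, image)
    feature_loss = F.mse_loss(encoder(recon), feat)
    return pixel_loss + lam * feature_loss

def blend(f_cs_hat, f_c, alpha=0.6):
    """Eq. (4): blend the whitened-and-colored feature f̂_cs with the content
    feature f_c; alpha is the user-controlled style weight (0.6 in the paper)."""
    return alpha * f_cs_hat + (1.0 - alpha) * f_c
```

In the multi-level stylization pipeline described in the paper, a blending step like `blend` would be applied to the WCT output at each VGG level before that level's decoder is invoked.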