Preserving Structural Consistency in Arbitrary Artist and Artwork Style Transfer

Authors: Jingyu Wu, Lefan Hou, Zejian Li, Jun Liao, Li Liu, Lingyun Sun

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We broadly evaluate our method across six large-scale benchmark datasets. Empirical results show that our method achieves arbitrary artist-style and artwork-style extraction from a single artwork, and effectively avoids introducing the style image's structural features. Our method improves the state-of-the-art deception rate from 58.9% to 67.2% and the average FID from 48.74 to 42.83. (Illustrative sketches of the deception-rate and FID computations follow this table.)
Researcher Affiliation | Collaboration | (1) Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Zhejiang University, Hangzhou 310027, China; (2) School of Software Technology, Zhejiang University, Ningbo 315048, China; (3) School of Big Data & Software Engineering, Chongqing University, Chongqing 400044, China; (4) Zhejiang-Singapore Innovation and AI Joint Research Lab, Hangzhou 310027, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | Datasets. For artwork-style and artist-style transfer, we use WikiArt (Karayev et al. 2013) for content and style images. For photo-realistic style transfer, we evaluate the performance of preserving structural consistency on the following five large datasets: (1) LSUN Church (Yu et al. 2015), (2) LSUN Bedrooms (Yu et al. 2015), (3) Flickr-Faces-HQ (FFHQ) (Karras, Laine, and Aila 2019), (4) Flickr Waterfalls (100k self-collected images) (Cai et al. 2021), (5) CelebA-HQ (Karras et al. 2017).
Dataset Splits | No | The paper mentions 'training' but does not provide specific train/validation/test splits, percentages, or sample counts, nor does it refer to predefined splits for the datasets used.
Hardware Specification | No | The paper does not provide specific hardware details, such as the GPU or CPU models used to run its experiments.
Software Dependencies | No | The paper does not provide version numbers for the software libraries or frameworks used in the experiments.
Experiment Setup | No | The paper states the training-data resolution and the loss functions, but it does not provide the hyperparameter values, such as learning rate, batch size, number of epochs, or optimizer settings, needed for a reproducible experimental setup.
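
For readers unfamiliar with the metrics cited in the Research Type row: the deception rate commonly used in artist-style transfer work (following Sanakoyeu et al. 2018) is the fraction of stylized outputs that a classifier pretrained on WikiArt artist labels attributes to the intended target artist. A minimal sketch, assuming a pretrained `classifier` and batched tensors; the names below are illustrative assumptions, not from the paper:

```python
# Hypothetical sketch of the deception-rate metric (Sanakoyeu et al. 2018).
# Assumes `classifier` is a network pretrained to classify WikiArt artists.
import torch

@torch.no_grad()
def deception_rate(classifier, stylized_images, target_artist_ids):
    """Fraction of stylized images the artist classifier attributes
    to the intended target artist."""
    logits = classifier(stylized_images)   # (N, num_artists)
    predicted = logits.argmax(dim=1)       # predicted artist per image
    return (predicted == target_artist_ids).float().mean().item()
```

FID follows the standard definition of Heusel et al. (2017): the Fréchet distance between Gaussians fitted to Inception-v3 pool features of real and generated images, FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^(1/2)). A minimal sketch, assuming the (N, 2048) feature matrices have already been extracted (feature extraction is omitted):

```python
# A minimal FID sketch from the standard definition (Heusel et al. 2017).
# `real_feats` / `gen_feats` are assumed to be (N, 2048) Inception-v3
# pool features; this is not the paper's own evaluation code.
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerics
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```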