Artistic Style Transfer with Internal-external Learning and Contrastive Learning

Authors: Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu

NeurIPS 2021

Reproducibility assessment (variable — result — supporting LLM response):

Research Type — Experimental
"We conduct extensive experiments, showing that our proposed method can not only produce visually more harmonious and satisfying artistic images, but also promote the stability and consistency of rendered video clips. In this section, we first introduce the experimental settings. Then we present qualitative and quantitative comparisons between the proposed method and several baseline models. Finally, we discuss the effect of each component in our model by conducting ablation studies."

Researcher Affiliation — Academia
College of Computer Science and Technology, Zhejiang University. {cshbchen, cszhl, endywon, qinglanwuji, zzwcs, liailin, wxing, ldm}@zju.edu.cn

Pseudocode — No
The paper describes the method using mathematical equations and textual explanations, but does not include structured pseudocode or algorithm blocks.

Open Source Code — Yes
"Our code is available at: https://github.com/HalbertCH/IEContraAST."

Open Datasets — Yes
"Like [15, 58, 36, 19], we take MS-COCO [33] and Wiki Art [22] as the content dataset and style dataset, respectively."

Dataset Splits — No
The paper describes the training stage and the use of the MS-COCO and Wiki Art datasets, but does not explicitly specify a validation split or training/validation/test percentages.

Hardware Specification — No
The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.

Software Dependencies — No
The paper mentions using SANet as a backbone and the Adam optimizer, but does not provide version numbers for software dependencies such as Python, PyTorch/TensorFlow, or CUDA.

Experiment Setup — Yes
"The hyper-parameter τ in Equation (5) and (6) is set to 0.2. The loss weights in Equation (4) and (7) are set to λidentity1 = 50, λidentity2 = 1, λ1 = 1, λ2 = 5, λ3 = 1, λ4 = 1, λ5 = 0.3, and λ6 = 0.3. We train our network using the Adam optimizer with a learning rate of 0.0001 and a batch size of 16 for 160000 iterations."
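The reported training configuration can be summarized as a small config sketch. This is a minimal illustration, not the authors' code: the dictionary keys and the `total_loss` helper are hypothetical names chosen here for clarity, while the numeric values are the ones quoted above.

```python
# Hyper-parameters as reported in the paper's experiment setup.
CONFIG = {
    "tau": 0.2,             # temperature in the contrastive losses (Eq. 5 and 6)
    "learning_rate": 1e-4,  # Adam optimizer
    "batch_size": 16,
    "iterations": 160_000,
}

# Loss weights from Equations (4) and (7). Key names are illustrative,
# not identifiers from the released repository.
LOSS_WEIGHTS = {
    "identity1": 50.0,
    "identity2": 1.0,
    "lambda1": 1.0,
    "lambda2": 5.0,
    "lambda3": 1.0,
    "lambda4": 1.0,
    "lambda5": 0.3,
    "lambda6": 0.3,
}

def total_loss(loss_terms: dict) -> float:
    """Weighted sum of the individual loss terms.

    `loss_terms` maps each (hypothetical) term name to its raw scalar
    value; in training these would come from the network's forward pass.
    """
    return sum(LOSS_WEIGHTS[name] * value for name, value in loss_terms.items())
```

For example, with every raw term equal to 1.0 the total is simply the sum of the weights, 59.6; during training the raw terms would instead be the per-batch loss values.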