Progressive Painterly Image Harmonization from Low-Level Styles to High-Level Styles
Authors: Li Niu, Yan Hong, Junyan Cao, Liqing Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the benchmark dataset demonstrate the effectiveness of our progressive harmonization network. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University {ustcnewly, hy2628982280, joy_c1, lqzhang}@sjtu.edu.cn |
| Pseudocode | No | The paper describes its network architecture and algorithms in prose and figures, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about making its source code open or available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Following previous works (Peng, Wang, and Wang 2019; Cao, Hong, and Niu 2023), we conduct experiments on COCO (Lin et al. 2014) and WikiArt (Nichol 2016). COCO (Lin et al. 2014) contains instance segmentation annotations for 80 object categories, while WikiArt (Nichol 2016) contains digital artistic paintings from different styles. |
| Dataset Splits | No | The paper mentions using a 'training set' and 'test set' for training the GRU and evaluating the model, and describes how composite images were created, but it does not provide specific percentages or sample counts for how the main dataset was split into training, validation, and test sets for model development. |
| Hardware Specification | Yes | Our model is implemented by PyTorch 1.10.0, which is distributed on Ubuntu 20.04 LTS operating system, with 128GB memory, Intel(R) Xeon(R) Silver 4116 CPU, and one GeForce RTX 3090 GPU. |
| Software Dependencies | Yes | Our model is implemented by PyTorch 1.10.0, which is distributed on Ubuntu 20.04 LTS operating system |
| Experiment Setup | Yes | We resize the input images as 256×256 and set the batch size as 4 for model training. We adopt Adam (Kingma and Ba 2015) with learning rate of 0.0001 as the optimization solver. (A PyTorch sketch of this configuration follows the table.) |
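
For readers trying to reproduce the reported training configuration (256×256 inputs, batch size 4, Adam with learning rate 0.0001), the sketch below restates those settings in PyTorch. Since the paper does not release code, `PlaceholderNet`, the dummy tensors, and the L1 loss are hypothetical stand-ins, not the authors' actual network or objective.

```python
import torch
from torch import nn, optim

# Hypothetical stand-in for the progressive harmonization network; the authors'
# architecture is not released, so this only illustrates the reported setup.
class PlaceholderNet(nn.Module):
    def __init__(self):
        super().__init__()
        # composite image (3 ch) + foreground mask (1 ch) -> harmonized image (3 ch)
        self.body = nn.Conv2d(4, 3, kernel_size=3, padding=1)

    def forward(self, composite, mask):
        return self.body(torch.cat([composite, mask], dim=1))

model = PlaceholderNet()
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # Adam, learning rate 0.0001 (as reported)
batch_size = 4                                       # reported batch size

# Dummy batch at the reported 256x256 input resolution, standing in for
# (composite painterly image, foreground mask, reference target).
composite = torch.rand(batch_size, 3, 256, 256)
mask = torch.rand(batch_size, 1, 256, 256)
target = torch.rand(batch_size, 3, 256, 256)

# One illustrative optimization step; the L1 loss is a placeholder,
# not the paper's training objective.
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(composite, mask), target)
loss.backward()
optimizer.step()
```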