Painterly Image Harmonization in Dual Domains
Authors: Junyan Cao, Yan Hong, Li Niu
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the benchmark dataset show the effectiveness of our method. Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University {joyc1, ustcnewly}@sjtu.edu.cn, yanhong.sjtu@gmail.com |
| Pseudocode | No | The paper describes the method using textual descriptions and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization. |
| Open Datasets | Yes | We conduct experiments on COCO (Lin et al. 2014) and WikiArt (Tan et al. 2019). |
| Dataset Splits | No | The paper mentions using the COCO and WikiArt datasets but does not explicitly provide train/validation/test splits with specific percentages or sample counts in the main text. It states 'Refer to the Supplementary for more implementation details.' |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper states 'Refer to the Supplementary for more implementation details' but does not specify software dependencies (e.g., library names with version numbers like Python 3.8, PyTorch 1.9) within the main text. |
| Experiment Setup | Yes | So far, the total loss for training G is summarized as $\mathcal{L}_G = \mathcal{L}_s + \lambda_c \mathcal{L}_c + \lambda_{adv} \mathcal{L}_G^{adv}$ (Eq. 6), where the trade-off parameters $\lambda_c$ and $\lambda_{adv}$ are set to 2 and 10 respectively in our experiments. A minimal code sketch of this loss combination follows the table. |
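
Below is a minimal Python sketch of how the weighted total generator loss in Eq. (6) could be assembled, assuming the style, content, and adversarial loss terms have already been computed as scalar tensors; the function and variable names are illustrative and do not come from the authors' released code.

```python
import torch

# Trade-off weights reported in the paper for Eq. (6).
LAMBDA_C = 2.0     # weight on the content loss L_c
LAMBDA_ADV = 10.0  # weight on the generator adversarial loss L_G^adv

def total_generator_loss(loss_s: torch.Tensor,
                         loss_c: torch.Tensor,
                         loss_adv: torch.Tensor) -> torch.Tensor:
    """Weighted sum L_G = L_s + lambda_c * L_c + lambda_adv * L_G^adv."""
    return loss_s + LAMBDA_C * loss_c + LAMBDA_ADV * loss_adv

# Example with dummy scalar losses standing in for the real terms.
loss_g = total_generator_loss(torch.tensor(1.2), torch.tensor(0.4), torch.tensor(0.05))
print(loss_g)  # tensor(2.5000)
```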