Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Painterly Image Harmonization in Dual Domains
Authors: Junyan Cao, Yan Hong, Li Niu
AAAI 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the benchmark dataset show the effectiveness of our method. Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University EMAIL, EMAIL |
| Pseudocode | No | The paper describes the method using textual descriptions and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization. |
| Open Datasets | Yes | We conduct experiments on COCO (Lin et al. 2014) and WikiArt (Tan et al. 2019). |
| Dataset Splits | No | The paper mentions using the COCO and WikiArt datasets but does not explicitly provide train/validation/test splits with specific percentages or sample counts in the main text. It states 'Refer to the Supplementary for more implementation details.' |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper states 'Refer to the Supplementary for more implementation details' but does not specify software dependencies (e.g., library names with version numbers like Python 3.8, PyTorch 1.9) within the main text. |
| Experiment Setup | Yes | So far, the total loss for training G is summarized as L_G = L_s + λ_c L_c + λ_adv L_adv^G (Eq. 6), where the trade-off parameters λ_c and λ_adv are set to 2 and 10, respectively, in our experiments. |
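The quoted experiment setup can be illustrated with a minimal sketch of the total generator loss in Eq. (6). Only the trade-off weights λ_c = 2 and λ_adv = 10 come from the paper; the individual loss values passed in below are hypothetical placeholders, not the paper's actual style, content, or adversarial losses.

```python
# Sketch of Eq. (6): L_G = L_s + λ_c * L_c + λ_adv * L_adv^G.
# The weights below are the trade-off parameters reported in the paper;
# the loss terms themselves are placeholder scalars for illustration.

LAMBDA_C = 2.0     # content loss weight λ_c (from the paper)
LAMBDA_ADV = 10.0  # adversarial loss weight λ_adv (from the paper)

def total_generator_loss(l_style: float, l_content: float, l_adv: float) -> float:
    """Combine the three loss terms with the paper's trade-off weights."""
    return l_style + LAMBDA_C * l_content + LAMBDA_ADV * l_adv

# Example with arbitrary placeholder loss values:
print(total_generator_loss(0.5, 0.3, 0.1))  # 0.5 + 2*0.3 + 10*0.1 = 2.1
```

In practice these terms would be tensors produced by the style, content, and adversarial branches during training, but the weighted sum is the same.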