Interactive Portrait Harmonization
Authors: Jeya Maria Jose Valanarasu, He Zhang, Jianming Zhang, Yilin Wang, Zhe Lin, Jose Echevarria, Yinglan Ma, Zijun Wei, Kalyan Sunkavalli, Vishal Patel
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real-world datasets show that the proposed approach is efficient and robust compared to previous harmonization baselines, especially for portraits. |
| Researcher Affiliation | Collaboration | Jeya Maria Jose Valanarasu¹, He Zhang², Jianming Zhang², Yilin Wang², Zhe Lin², Jose Echevarria², Yinglan Ma², Zijun Wei², Kalyan Sunkavalli², Vishal M. Patel¹; ¹Johns Hopkins University, ²Adobe Research |
| Pseudocode | No | No pseudocode or algorithm block was found in the paper. |
| Open Source Code | Yes | The code can be found here: https://github.com/jeya-maria-jose/Interactive-Portrait-Harmonization |
| Open Datasets | Yes | Publicly available datasets like iHarmony4 Cong et al. (2020) were proposed for background harmonization and do not provide any reference region information. So, we curate a synthetic dataset and also introduce a real-world portrait harmonization dataset for validation. 1) IntHarmony: ... IntHarmony is built on top of MS-COCO dataset Lin et al. (2014). |
| Dataset Splits | No | The number of training images in IntHarmony is 118,287 and 959 images are allocated for testing. No explicit mention of a validation split percentage or count was found for any dataset. |
| Hardware Specification | Yes | Our framework is developed in Pytorch Paszke et al. (2019) and the training is done using NVIDIA RTX 8000 GPUs. |
| Software Dependencies | No | The paper mentions 'Pytorch Paszke et al. (2019)' but does not provide a specific version number for it or any other software dependency. |
| Experiment Setup | Yes | We use an Adam optimizer Kingma & Ba (2014) with a learning rate of 10⁻⁴, 10⁻⁵, 10⁻⁶ at each stage respectively. The batch size is set equal to 48. The images are resized to 256×256 while training. |
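The reported setup (Adam, per-stage learning rates of 10⁻⁴/10⁻⁵/10⁻⁶, batch size 48, 256×256 inputs) can be captured as a small configuration sketch. This is a minimal illustration of the staged schedule only; the dictionary keys and the `lr_for_stage` helper are our own naming, not the authors' code.

```python
# Hedged sketch of the training hyperparameters reported in the paper.
# Only the numbers come from the paper; all identifiers are illustrative.
config = {
    "optimizer": "Adam",            # Kingma & Ba (2014)
    "stage_lrs": [1e-4, 1e-5, 1e-6],  # learning rate per training stage
    "batch_size": 48,
    "image_size": (256, 256),       # images resized to 256x256 for training
}

def lr_for_stage(stage: int) -> float:
    """Return the learning rate for a 1-indexed training stage."""
    return config["stage_lrs"][stage - 1]
```

In a PyTorch training loop one would typically rebuild the optimizer (e.g. `torch.optim.Adam(model.parameters(), lr=lr_for_stage(s))`) at the start of each stage; the paper does not spell out whether optimizer state is carried over between stages.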