Generative Image Inpainting with Segmentation Confusion Adversarial Training and Contrastive Learning
Authors: Zhiwen Zuo, Lei Zhao, Ailin Li, Zhizhong Wang, Zhanjie Zhang, Jiafu Chen, Wei Xing, Dongming Lu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on two benchmark datasets, demonstrating our model's effectiveness and superiority both qualitatively and quantitatively. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Zhejiang University {zzwcs, cszhl, liailin, endywon, cszzj, chenjiafu, wxing, ldm}@zju.edu.cn |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We train and evaluate our method on two benchmark datasets Places2 (Zhou et al. 2017) and CelebA (Liu et al. 2018b) following their official training/validation splits. |
| Dataset Splits | Yes | We train and evaluate our method on two benchmark datasets Places2 (Zhou et al. 2017) and CelebA (Liu et al. 2018b) following their official training/validation splits. |
| Hardware Specification | Yes | We train our model with a batch size of 8 on a single 24 GB NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions architectural components such as 'U-Net' and 'AOT blocks' and training techniques such as 'hinge loss' and 'spectral normalization', but it does not specify any programming languages, libraries, or solvers with corresponding version numbers. |
| Experiment Setup | Yes | We train our model with a batch size of 8 on a single 24 GB NVIDIA RTX 3090 GPU. All the masks and images for training and evaluation are of size 256 × 256. The negative sample size for the semantic contrastive learning loss is 8. We conduct experiments on the CelebA dataset to select the hyper-parameters from a set of empirical values, i.e., [0.1, 1, 5, 10], and find that setting λ_adv = 1, λ_text = 10, λ_sem = 1, and λ_rec = 10 works fine for our model. |
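
The Software Dependencies row notes that the paper names hinge loss and spectral normalization without pinning down any library or version. A minimal PyTorch sketch of that standard combination is shown below; the patch-style discriminator architecture and layer widths are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class PatchDiscriminator(nn.Module):
    """Illustrative discriminator with spectral normalization on every conv layer."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_channels, 64, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(128, 1, 4, stride=1, padding=1)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def d_hinge_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Discriminator hinge loss: push real logits above +1 and fake logits below -1.
    return torch.relu(1.0 - real_logits).mean() + torch.relu(1.0 + fake_logits).mean()


def g_hinge_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    # Generator hinge loss: raise the discriminator's score on generated samples.
    return -fake_logits.mean()
```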
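The Experiment Setup row reports four loss weights (λ_adv = 1, λ_text = 10, λ_sem = 1, λ_rec = 10) and a negative sample size of 8 for the semantic contrastive loss, which suggests a weighted sum of four terms. The sketch below assembles such an objective, with an InfoNCE-style contrastive term standing in for the paper's semantic contrastive loss; only the λ values and K = 8 come from the paper, while the temperature and the InfoNCE formulation are assumptions.

```python
import torch
import torch.nn.functional as F

# Loss weights as reported in the paper.
LAMBDA_ADV, LAMBDA_TEXT, LAMBDA_SEM, LAMBDA_REC = 1.0, 10.0, 1.0, 10.0


def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             negatives: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss (an assumed stand-in for the paper's
    semantic contrastive loss).

    anchor, positive: (B, D) feature vectors; negatives: (B, K, D) with K = 8
    negatives per anchor, matching the paper's reported negative sample size.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)          # (B, 1)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives)      # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature  # (B, 1+K)
    # The positive sits at index 0 of each row of logits.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)


def total_generator_loss(l_adv: torch.Tensor, l_text: torch.Tensor,
                         l_sem: torch.Tensor, l_rec: torch.Tensor) -> torch.Tensor:
    # Weighted sum of the four terms with the reported coefficients.
    return (LAMBDA_ADV * l_adv + LAMBDA_TEXT * l_text
            + LAMBDA_SEM * l_sem + LAMBDA_REC * l_rec)
```

The individual loss inputs (adversarial, texture, semantic, reconstruction) would come from the paper's respective modules; only their weighting is reproduced here.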