SyFormer: Structure-Guided Synergism Transformer for Large-Portion Image Inpainting
Authors: Jie Wu, Yuchao Feng, Honghui Xu, Chuanmeng Zhu, Jianwei Zheng
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on two publicly available datasets, i.e., CelebA-HQ and Places2, to qualitatively and quantitatively demonstrate the superiority of our model over state-of-the-art methods. |
| Researcher Affiliation | Academia | ¹Zhejiang University of Technology, ²Zhejiang University |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | Two well-known datasets, i.e., CelebA-HQ (Karras et al. 2018) and Places2 (Zhou et al. 2017), are used for the performance investigation. |
| Dataset Splits | Yes | The CelebA-HQ data is split into training, validation, and test sets in a ratio of 24:1:5. We keep 220,000 and 5,000 images from the original Places2 set for training and testing, respectively. (A worked split calculation appears after the table.) |
| Hardware Specification | Yes | All experiments are conducted on two RTX 3090 GPUs, each with 12 GB of video memory. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number or other software dependencies with version numbers. |
| Experiment Setup | Yes | All experiments are conducted using PyTorch with a batch size of 8. Our model is optimized by Adam with a learning rate of 2 × 10⁻⁴. The hyper-parameters in Eq. (15) are set as λadv = 0.1, λrec = 40, λsty = 120, λper = 0.05 to generate perceptually optimal results. (A hedged training-setup sketch appears after the table.) |
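The reported CelebA-HQ ratio of 24:1:5 can be turned into concrete split sizes. The arithmetic below is a minimal sketch that assumes the standard 30,000-image CelebA-HQ set; the paper quotes only the ratio, not the total.

```python
# Minimal split-size sketch for the reported 24:1:5 CelebA-HQ ratio.
# Assumption (not stated in the paper): the standard 30,000-image set.
total = 30_000
ratio = (24, 1, 5)
unit = total // sum(ratio)                 # 30,000 / 30 = 1,000 images per unit
train, val, test = (r * unit for r in ratio)
print(train, val, test)                    # 24000 1000 5000
```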
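Since the paper reports its optimizer, learning rate, and the Eq. (15) loss weights but releases no code, the following PyTorch snippet is a hedged reconstruction of that training setup. Only the weights and the Adam learning rate come from the paper; the `total_loss` function name, its four loss-term arguments, and the placeholder module are hypothetical stand-ins for the unpublished SyFormer implementation.

```python
import torch

# Loss weights as reported for Eq. (15) of the paper.
LAMBDA_ADV, LAMBDA_REC, LAMBDA_STY, LAMBDA_PER = 0.1, 40.0, 120.0, 0.05

def total_loss(l_adv, l_rec, l_sty, l_per):
    """Weighted sum of the adversarial, reconstruction, style, and
    perceptual losses, using the weights quoted above. The four loss
    terms themselves would be computed by the (unreleased) model code."""
    return (LAMBDA_ADV * l_adv + LAMBDA_REC * l_rec
            + LAMBDA_STY * l_sty + LAMBDA_PER * l_per)

# Optimizer setup as reported: Adam with a learning rate of 2e-4.
# The Conv2d is a placeholder standing in for the actual SyFormer network.
model = torch.nn.Conv2d(3, 3, kernel_size=3)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
```

Batch size 8 would then be set on the `DataLoader` side; none of the above fixes the architecture itself, only the reported optimization recipe.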