Generative Image Inpainting with Submanifold Alignment

Authors: Ang Li, Jianzhong Qi, Rui Zhang, Xingjun Ma, Kotagiri Ramamohanarao

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (5 Experiments) | We compare with four state-of-the-art inpainting models: CE [Pathak et al., 2016], GL [Iizuka et al., 2017], GntIpt [Yu et al., 2018] and GMCNN [Wang et al., 2018]. We apply the proposed training strategy on both the CE and GL GAN architectures, and denote their improved versions as CE+LID and GL+LID respectively. We evaluate all models on four benchmark datasets: Paris Street View [Doersch et al., 2012], CelebA [Liu et al., 2015], Places2 [Zhou et al., 2017] and ImageNet [Russakovsky et al., 2015].
Researcher Affiliation | Academia | Ang Li, Jianzhong Qi, Rui Zhang, Xingjun Ma and Kotagiri Ramamohanarao, The University of Melbourne; angl4@student.unimelb.edu.au, {jianzhong.qi, rui.zhang, xingjun.ma, kotagiri}@unimelb.edu.au
Pseudocode | No | The paper describes its methods textually and with mathematical formulas, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | We evaluate all models on four benchmark datasets: Paris Street View [Doersch et al., 2012], CelebA [Liu et al., 2015], Places2 [Zhou et al., 2017] and ImageNet [Russakovsky et al., 2015].
Dataset Splits | No | The paper uses benchmark datasets but does not explicitly specify how the data was split into training, validation, and test sets (e.g., percentages, sample counts, or reference to predefined splits).
Hardware Specification | Yes | The training takes 16 hours on Paris Street View, 20 hours on CelebA and one day on both Places2 and ImageNet using an Nvidia GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions using 'conv4_2 of VGG19' but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions).
Experiment Setup | Yes | We set λA to 0.01 as in [Yu et al., 2018; Iizuka et al., 2017]. Based on our analysis in Section 5.3, we empirically set λI = 0.01, λP = 0.1, kI = 8 and kP = 5 for our models. All training images are resized and cropped to 256 × 256. ... The size of a training batch is 64.
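The hyperparameters quoted in the experiment-setup row can be collected into a minimal config sketch. The key names below are hypothetical (the paper only gives the symbols λA, λI, λP, kI, kP); only the values come from the quoted text, and the comments reflect one plausible reading of each symbol's role:

```python
# Hypothetical config capturing the hyperparameters quoted from the paper.
# Key names and comments are assumptions; the values are from the quoted text.
TRAINING_CONFIG = {
    "lambda_A": 0.01,          # λA, set as in [Yu et al., 2018; Iizuka et al., 2017]
    "lambda_I": 0.01,          # λI, set empirically per the paper's Section 5.3
    "lambda_P": 0.1,           # λP, set empirically per the paper's Section 5.3
    "k_I": 8,                  # kI, set empirically per the paper's Section 5.3
    "k_P": 5,                  # kP, set empirically per the paper's Section 5.3
    "image_size": (256, 256),  # training images resized and cropped to 256 × 256
    "batch_size": 64,          # size of a training batch
}
```

A dict like this would make it easy to check a reimplementation's settings against the paper's reported values at a glance.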