Pixel-Wise Warping for Deep Image Stitching

Authors: Hyeokjun Kweon, Hyeonseong Kim, Yoonsu Kang, Youngho Yoon, Wooseong Jeong, Kuk-Jin Yoon

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For training and evaluating the proposed framework, we build and publish a novel dataset including image pairs with corresponding pixel-wise ground truth warp and stitched result images. We show that the results of the proposed framework are qualitatively and quantitatively superior to those of the conventional methods.
Researcher Affiliation | Academia | Hyeokjun Kweon*, Hyeonseong Kim*, Yoonsu Kang*, Youngho Yoon*, Wooseong Jeong and Kuk-Jin Yoon, Korea Advanced Institute of Science and Technology, {0327june, brian617, gzgzys9887, dudgh1732, stk14570, kjyoon}@kaist.ac.kr
Pseudocode | No | The paper describes the algorithmic steps and components of its framework but does not present them in a structured pseudocode block or a section explicitly labeled 'Algorithm'.
Open Source Code | No | The paper states that the authors 'build and publish a novel dataset' but does not mention releasing the source code for the proposed framework.
Open Datasets | Yes | For training and evaluating the proposed framework, we build and publish a novel dataset including image pairs with corresponding pixel-wise ground truth warp and stitched result images. We utilize two distinct 3D virtual environments: the S2D3D dataset (Armeni et al. 2016) for indoor scenes and the CARLA simulator (Dosovitskiy et al. 2017) for outdoor scenes.
Dataset Splits | No | The paper mentions using a validation set for the PDIS dataset ('We conduct the experiment on the validation set of PDIS dataset'), but it does not specify the split percentages or sample counts for the training, validation, and test sets needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the 'AdamW optimizer (Loshchilov and Hutter 2017)' and 'layer relu2_2 from the VGG16 (Simonyan and Zisserman 2014) pre-trained on ImageNet (Krizhevsky, Sutskever, and Hinton 2012)', and that its PWM borrows 'the architecture of (Teed and Deng 2020)'. However, it does not provide version numbers for any of these software components, libraries, or frameworks (see the environment-recording sketch after the table).
Experiment Setup | Yes | We use the AdamW optimizer (Loshchilov and Hutter 2017) (β1 = 0.5, β2 = 0.999, and lr = 1e-4), with a batch size of 8. For the perceptual loss (Eq. 8), we use layer relu2_2 from the VGG16 (Simonyan and Zisserman 2014) pre-trained on ImageNet (Krizhevsky, Sutskever, and Hinton 2012). (A training-setup sketch follows the table.)
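The Software Dependencies row flags that none of the named components carry version numbers, so a reproducer has to record their own environment. A minimal sketch in Python, assuming a PyTorch/torchvision stack (the paper never names its deep learning framework, so the stack itself is an assumption):

```python
# Record the versions of the otherwise-unversioned dependencies.
# Assumes a PyTorch/torchvision stack; this is not stated in the paper.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```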
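The Experiment Setup row pins down the optimizer and the perceptual loss. Below is a minimal PyTorch sketch of that configuration. The paper specifies AdamW with β1 = 0.5, β2 = 0.999, lr = 1e-4, a batch size of 8, and relu2_2 features of an ImageNet-pretrained VGG16; everything else here (the `VGGPerceptualLoss` class, the L1 feature distance, and the stand-in network) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    """Feature distance at relu2_2 of an ImageNet-pretrained VGG16.

    In torchvision's VGG16, relu2_2 is features[8], so features[:9] is the
    sub-network up to and including that activation. Using an L1 distance
    on the features is an assumption; the paper does not state the metric.
    """
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.extractor = vgg.features[:9].eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)  # the loss network stays frozen

    def forward(self, pred, target):
        # ImageNet mean/std normalization of the inputs is omitted for brevity.
        return nn.functional.l1_loss(self.extractor(pred), self.extractor(target))

net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the stitching network
# Optimizer exactly as quoted: AdamW with beta1=0.5, beta2=0.999, lr=1e-4.
optimizer = torch.optim.AdamW(net.parameters(), lr=1e-4, betas=(0.5, 0.999))

perceptual = VGGPerceptualLoss()
images = torch.rand(8, 3, 256, 256)  # batch size of 8, as in the paper
target = torch.rand(8, 3, 256, 256)  # placeholder ground-truth stitched image
loss = perceptual(net(images), target)
loss.backward()
optimizer.step()
```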