Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks

Authors: Zhiying Jiang, Xingyuan Li, Jinyuan Liu, Xin Fan, Risheng Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive evaluation across real-world and synthetic datasets validates the deterioration of SoA on stitching performance. Furthermore, AAT emerges as a more robust solution against adversarial perturbations, delivering superior stitching results. Code is available at: https://github.com/Jzy2017/TRIS.
Researcher Affiliation | Academia | 1 School of Software Engineering, Dalian University of Technology; 2 School of Mechanical Engineering, Dalian University of Technology
Pseudocode | Yes | Algorithm 1: SoA-based Perturbation Generation and Algorithm 2: SoA-based Adaptive Adversarial Training (a hedged sketch of the perturbation-generation loop follows the table).
Open Source Code | Yes | Code is available at: https://github.com/Jzy2017/TRIS.
Open Datasets | Yes | There are two benchmarks available for image stitching: a synthetic dataset based on MS-COCO (Nie et al. 2020) and the real-world UDIS-D (Nie et al. 2021) collected from various moving videos.
Dataset Splits | Yes | For the training of the homography estimation module, the synthesized MS-COCO dataset is employed for the initial 120 epochs, followed by fine-tuning on the training set of UDIS-D for 20 epochs. The reconstruction module is trained on UDIS-D for 30 epochs, adhering to the same hyperparameter configuration. For evaluation, the test set of UDIS-D is adopted, which contains 1106 image pairs. Moreover, 62 pairs of real-world challenging cases (RWCC) from (Zhang et al. 2020; Lin et al. 2015; Chang, Sato, and Chuang 2014; Gao, Kim, and Brown 2011; Chen and Chuang 2016; Li et al. 2017) are additionally used as comprehensive validation (a configuration sketch of this schedule follows the table).
Hardware Specification | Yes | Both training and testing are implemented in PyTorch on an NVIDIA Tesla A40 GPU.
Software Dependencies | No | The paper states 'implemented on Pytorch' but does not provide a specific version for PyTorch or any other software dependency.
Experiment Setup | Yes | The optimizer is Adam (Kingma and Ba 2014) with an initial learning rate of 1e-4 and a decay rate of 0.96. For the adversarial attack, the perturbation intensity ϵ is set to 8/255, the iteration count is 3, and the step size is 5/255. Both training and testing are implemented in PyTorch on an NVIDIA Tesla A40 GPU. The homography estimation module is trained on the synthesized MS-COCO dataset for the initial 120 epochs and fine-tuned on the training set of UDIS-D for 20 epochs; the reconstruction module is trained on UDIS-D for 30 epochs with the same hyperparameter configuration (a setup sketch follows the table).
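The pseudocode row above only names Algorithm 1; the paper's exact procedure is not reproduced here. Below is a minimal PGD-style sketch of what an SoA-based perturbation-generation loop could look like, assuming the reported attack budget (ϵ = 8/255, 3 iterations, step size 5/255); `generate_perturbation`, `model`, and `stitching_loss` are hypothetical placeholder names, not the authors' implementation.

```python
import torch

def generate_perturbation(model, img_a, img_b, stitching_loss,
                          epsilon=8/255, steps=3, alpha=5/255):
    """PGD-style sketch: craft an additive perturbation on one image of the pair."""
    delta = torch.zeros_like(img_a, requires_grad=True)
    for _ in range(steps):
        stitched = model(img_a + delta, img_b)   # stitch the perturbed pair
        loss = stitching_loss(stitched)          # objective the attack maximizes
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # signed gradient ascent step
            delta.clamp_(-epsilon, epsilon)      # stay within the perturbation budget
        delta.grad.zero_()
    return delta.detach()
```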
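For quick reference, the reported training and evaluation schedule from the Dataset Splits row can be summarized as a configuration sketch; the dictionary keys and dataset labels below are illustrative assumptions, while the epoch and pair counts come from the paper.

```python
# Illustrative summary of the reported schedule; key names and labels are assumptions.
training_schedule = {
    "homography_estimation": [
        {"dataset": "synthetic MS-COCO", "epochs": 120},  # initial training
        {"dataset": "UDIS-D (train)", "epochs": 20},      # fine-tuning
    ],
    "reconstruction": [
        {"dataset": "UDIS-D (train)", "epochs": 30},      # same hyperparameters
    ],
}
evaluation_sets = {
    "UDIS-D (test)": 1106,  # image pairs
    "RWCC": 62,             # real-world challenging cases
}
```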
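A hedged sketch of the Experiment Setup row in PyTorch is given below. The placeholder network and the interpretation of the 0.96 decay rate as an exponential per-epoch learning-rate schedule are assumptions; the Adam learning rate and attack budget follow the reported values.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the stitching modules; the real
# architecture is not specified here and this stand-in is purely illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1)
)

# Reported optimizer: Adam with an initial learning rate of 1e-4;
# the 0.96 decay is assumed to be an exponential learning-rate decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)

# Reported adversarial attack budget.
EPSILON = 8 / 255  # perturbation intensity
STEPS = 3          # iteration count
ALPHA = 5 / 255    # step size
```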