Local-Global Transformer Enhanced Unfolding Network for Pan-sharpening

Authors: Mingsong Li, Yikun Liu, Tao Xiao, Yuwen Huang, Gongping Yang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experimental results on three satellite data sets demonstrate the effectiveness and efficiency of LGTEUN compared with state-of-the-art (SOTA) methods.
Researcher Affiliation | Academia | Mingsong Li, Yikun Liu, Tao Xiao, and Gongping Yang: School of Software, Shandong University, Jinan, China. Yuwen Huang: School of Computer, Heze University, Heze, China.
Pseudocode | No | The paper provides architectural diagrams and textual descriptions of the algorithm, but it does not include a formal pseudocode block or an algorithm box.
Open Source Code | Yes | The source code is available at https://github.com/lms-07/LGTEUN.
Open Datasets | Yes | For MS pan-sharpening, an 8-band MS data set acquired by the WorldView-3 sensor and two 4-band MS data sets acquired by the WorldView-2 and GaoFen-2 sensors are adopted for experimental analysis. (Data source footnote: https://www.l3harris.com/all-capabilities/high-resolution-satellite-imagery)
Dataset Splits | No | The paper states: 'Each data set is further split into non-overlapping subsets for training (about 1000 Lr MS/PAN/GT image pairs) and testing (about 140 Lr MS/PAN/GT image pairs)', but it does not explicitly mention a distinct validation split.
Hardware Specification | Yes | All the experiments are conducted in the PyTorch framework with a single NVIDIA GeForce GTX 3090 GPU.
Software Dependencies | No | The paper mentions that 'All the experiments are conducted in PyTorch framework' but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | The end-to-end training of LGTEUN is supervised by a mean absolute error (MAE) loss between the network output Z_K and the GT Hr MS image. The model is trained for 130 epochs on the 8-band data set and 1000 epochs on the two 4-band data sets. The Adam optimizer with β1 = 0.9 and β2 = 0.999 is employed for model optimization, and the batch size is set to 4. The initial learning rate is 1.5 × 10^-3 and decays by a factor of 0.85 every 100 epochs.
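
The reported hyperparameters map onto standard PyTorch components. Below is a minimal sketch of that training configuration, assuming a generic PyTorch workflow: the one-layer placeholder network and the random tensors standing in for the Lr MS/PAN/GT pairs are illustrative stand-ins, not the authors' LGTEUN implementation (see the linked repository for the actual code).

```python
# Sketch of the described training setup: MAE (L1) loss, Adam with
# betas (0.9, 0.999), batch size 4, lr 1.5e-3 decayed by 0.85 every 100 epochs.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network (NOT LGTEUN): maps concatenated [Lr MS upsampled, PAN]
# channels to a 4-band Hr MS estimate.
model = nn.Conv2d(5, 4, kernel_size=3, padding=1)

# Synthetic stand-in for the ~1000 Lr MS/PAN/GT training pairs.
lrms = torch.rand(8, 4, 64, 64)
pan = torch.rand(8, 1, 64, 64)
gt = torch.rand(8, 4, 64, 64)
loader = DataLoader(TensorDataset(lrms, pan, gt), batch_size=4, shuffle=True)

criterion = nn.L1Loss()  # MAE loss between network output and GT Hr MS image
optimizer = torch.optim.Adam(model.parameters(), lr=1.5e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.85)

num_epochs = 130  # 130 for the 8-band data set; 1000 for the two 4-band data sets
for epoch in range(num_epochs):
    for ms, p, target in loader:
        optimizer.zero_grad()
        pred = model(torch.cat([ms, p], dim=1))  # stand-in forward pass
        loss = criterion(pred, target)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr *= 0.85 every 100 epochs
```

Here nn.L1Loss corresponds to the MAE loss described in the paper, and StepLR with step_size=100 and gamma=0.85 reproduces the stated learning-rate schedule.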