Pan-Sharpening with Customized Transformer and Invertible Neural Network

Authors: Man Zhou, Jie Huang, Yanchi Fang, Xueyang Fu, Aiping Liu (pp. 3553-3561)

AAAI 2022

Reproducibility Assessment (Variable: Result. LLM Response)
Research Type: Experimental. "Extensive experiments over different kinds of satellite datasets demonstrate that our method outperforms state-of-the-art algorithms both visually and quantitatively with fewer parameters and FLOPs. Further, the ablation experiments also prove the effectiveness of the proposed customized long-range transformer and effective invertible neural feature fusion module for pan-sharpening."
Researcher Affiliation: Academia. Man Zhou (2,1)*, Jie Huang (1)*, Yanchi Fang (3), Xueyang Fu (1), Aiping Liu (1). Affiliations: (1) University of Science and Technology of China, China; (2) Hefei Institute of Physical Science, Chinese Academy of Sciences, China; (3) University of Toronto, Canada.
Pseudocode: No. No pseudocode or clearly labeled algorithm block is present in the paper.
Open Source Code: No. The paper does not provide an unambiguous statement about releasing code or a direct link to a source-code repository for the methodology described.
Open Datasets: No. The paper mentions the WorldView-II, WorldView-III, and GaoFen-2 datasets and states that "the basic information of which are listed in supplementary materials," but it provides no direct link, DOI, or specific citation through which these datasets can be publicly accessed.
Dataset Splits: No. The paper states "For each satellite, we have hundreds of image pairs, and they are divided into two parts for training and test," but it gives neither the exact train/test counts nor any mention of a validation split.
Hardware Specification: Yes. "We implement all our networks in PyTorch framework on the PC with a single NVIDIA GeForce GTX 2080Ti GPU."
Software Dependencies: No. The paper mentions the PyTorch framework but does not specify its version number or the versions of other key software dependencies.
Experiment Setup: Yes. "In the training phase, they are optimized by Adam optimizer over 1000 epochs with a learning rate of 8 × 10⁻⁴ and a batch size of 4. In the training set, the MS images are cropped into patches with the size of 128 × 128, and the corresponding PAN patches are with the size of 32 × 32."
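The paper's invertible neural feature fusion module is not reproduced in pseudocode, but the core idea of an invertible network can be sketched with an additive coupling layer, the standard building block of such modules. The sketch below is an illustration, not the authors' architecture: `phi` is a placeholder linear map standing in for a learned sub-network, and the dimension `D` is chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # illustrative feature dimension (not from the paper)
W = rng.standard_normal((D // 2, D // 2)) * 0.1  # placeholder weights

def phi(x):
    # Placeholder transform standing in for a learned sub-network.
    return np.tanh(x @ W)

def coupling_forward(x):
    # Split features in half; transform one half conditioned on the other.
    x1, x2 = x[: D // 2], x[D // 2 :]
    y1 = x1
    y2 = x2 + phi(x1)  # additive coupling: invertible by construction
    return np.concatenate([y1, y2])

def coupling_inverse(y):
    # Exact inverse: phi depends only on the untouched half, so it can
    # be recomputed and subtracted, losing no information.
    y1, y2 = y[: D // 2], y[D // 2 :]
    return np.concatenate([y1, y2 - phi(y1)])
```

Because the inverse is exact, such a fusion module preserves all information in its input features, which is the property the paper relies on.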
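The reported optimizer settings (Adam, learning rate 8 × 10⁻⁴, 1000 epochs) can be made concrete with a minimal Adam implementation. This is a sketch on a toy quadratic objective, not the paper's training loop; only the hyperparameter values mirror the reported setup.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=8e-4, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update with the paper's reported learning rate of 8e-4.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)      # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy stand-in for the network parameters and loss ||w||^2.
w = np.array([2.0, -3.0])
m = v = np.zeros_like(w)
for t in range(1, 1001):           # 1000 steps, echoing the 1000 epochs
    grad = 2.0 * w                 # gradient of the toy quadratic loss
    w, m, v = adam_step(w, grad, m, v, t)
```

Note that Adam's effective step size is roughly `lr` per iteration regardless of gradient scale, so with 1000 epochs the learning rate bounds how far parameters can travel during training.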