Cross-Scale Domain Adaptation with Comprehensive Information for Pansharpening

Authors: Meiqi Gong, Hao Zhang, Hebaixu Wang, Jun Chen, Jun Huang, Xin Tian, Jiayi Ma

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on various satellites demonstrate the superiority of our method over the state-of-the-arts in terms of information retention. Our code is publicly available at https://github.com/Meiqi-Gong/SDIPS." and, from Section 3.1 (Experimental Settings): "Datasets. We conduct experiments on three satellites: QuickBird (QB), GaoFen-2 (GF2) and WorldView-II (WV2)."
Researcher Affiliation | Academia | "Meiqi Gong¹, Hao Zhang¹, Hebaixu Wang¹, Jun Chen², Jun Huang¹, Xin Tian¹ and Jiayi Ma¹; ¹Electronic Information School, Wuhan University, Wuhan 430072, China; ²School of Automation, China University of Geosciences, Wuhan 430074, China"
Pseudocode | Yes | "Algorithm 1: The training process of SDIPS"
Open Source Code | Yes | "Our code is publicly available at https://github.com/Meiqi-Gong/SDIPS."
Open Datasets | No | "Datasets. We conduct experiments on three satellites: QuickBird (QB), GaoFen-2 (GF2) and WorldView-II (WV2). ... To expand the training dataset, we crop the FMS images to sizes of 80×80×4/80×80×8, and correspondingly crop the FPAN images to a size of 320×320. Subsequently, all images are downsampled by 4 times following Wald's protocol to obtain images at the reduced-resolution scale, with sizes of 20×20×4/20×20×8 for MS images and 80×80 for PAN images. We obtain a total of 10000 pairs of images for the training set by applying rotations, adding noise, and other means."
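As a concrete illustration of the reduced-resolution preparation quoted above (80×80×4/80×80×8 MS crops, 320×320 PAN crops, 4× downsampling per Wald's protocol), the following is a minimal sketch assuming PyTorch. The function name and the use of bicubic interpolation as the degradation filter are illustrative assumptions, not taken from the SDIPS code.

```python
# Minimal sketch of the reduced-resolution pair construction described above.
# Assumptions: PyTorch tensors, and bicubic interpolation as a stand-in for the
# (unspecified) degradation filter used under Wald's protocol.
import torch
import torch.nn.functional as F

def make_reduced_resolution_pair(fms_patch: torch.Tensor,
                                 fpan_patch: torch.Tensor,
                                 ratio: int = 4):
    """fms_patch: (C, 80, 80) with C = 4 or 8; fpan_patch: (1, 320, 320).

    Returns the reduced-resolution MS (C, 20, 20) and PAN (1, 80, 80) inputs;
    the original 80x80xC MS crop then serves as the reference at this scale.
    """
    ms_lr = F.interpolate(fms_patch.unsqueeze(0), scale_factor=1.0 / ratio,
                          mode="bicubic", align_corners=False).squeeze(0)
    pan_lr = F.interpolate(fpan_patch.unsqueeze(0), scale_factor=1.0 / ratio,
                           mode="bicubic", align_corners=False).squeeze(0)
    return ms_lr, pan_lr
```

Under Wald's protocol the network is then trained to map the degraded pair back to the original MS crop, so performance at the reduced-resolution scale can be measured against a known reference.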
Dataset Splits | No | The paper describes training and testing dataset sizes and methods for their creation but does not explicitly mention a separate validation dataset split.
Hardware Specification | Yes | "Experiments are implemented on the PyTorch platform using a 3.5-GHz Intel Core i9-9920X CPU and NVIDIA Titan RTX GPU."
Software Dependencies | No | The paper mentions the "PyTorch platform" but does not specify its version number or any other software dependencies with their versions.
Experiment Setup | Yes | "Training Details. The overall training process is reported in Algorithm 1. We first train the pansharpening network with the learning rate 5e-4 for 40 epochs. The network is optimized by minimizing Lps using the Adam optimizer with a decay rate of 0.95 every five epochs. After that, the pansharpening network, the reconstruction network and DSB are jointly optimized by minimizing Lall using the Adam optimizer with a decay rate of 0.95 every epoch, while the learning rate is set as 1e-4 and this part continues for 10 epochs. As the pansharpening network and the reconstruction network are equally important to train the EN block, λ1 and λ2 are set as 1. η and σ are set as 0.5 and 0.1 to minimize the potential performance degradation of the pansharpening network. α in the PReLU activation function is set as 0.25."
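The two-stage schedule quoted above can be summarized in a short sketch, assuming PyTorch and interpreting the "decay rate of 0.95" as a multiplicative learning-rate decay. The objects passed in (pansharpening_net, reconstruction_net, dsb, the losses L_ps and L_all, and train_loader) are placeholders for the paper's components, not code from the released repository.

```python
# Hedged sketch of the two-stage training schedule quoted above (PyTorch assumed).
# All arguments are placeholders for the paper's components; only the learning
# rates, epoch counts, and 0.95 decay schedules come from the quoted text.
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR, ExponentialLR

def train_sdips(pansharpening_net, reconstruction_net, dsb,
                L_ps, L_all, train_loader):
    # Stage 1: pansharpening network only, lr = 5e-4, 40 epochs,
    # learning rate multiplied by 0.95 every five epochs.
    opt1 = Adam(pansharpening_net.parameters(), lr=5e-4)
    sched1 = StepLR(opt1, step_size=5, gamma=0.95)
    for _ in range(40):
        for ms, pan, ref in train_loader:
            opt1.zero_grad()
            L_ps(pansharpening_net(ms, pan), ref).backward()
            opt1.step()
        sched1.step()

    # Stage 2: joint optimization of the pansharpening network, the
    # reconstruction network and DSB, lr = 1e-4, 10 epochs, learning rate
    # multiplied by 0.95 every epoch; the weights lambda1 = lambda2 = 1,
    # eta = 0.5 and sigma = 0.1 are assumed to enter through L_all.
    params = (list(pansharpening_net.parameters())
              + list(reconstruction_net.parameters())
              + list(dsb.parameters()))
    opt2 = Adam(params, lr=1e-4)
    sched2 = ExponentialLR(opt2, gamma=0.95)
    for _ in range(10):
        for ms, pan, ref in train_loader:
            opt2.zero_grad()
            L_all(ms, pan, ref).backward()
            opt2.step()
        sched2.step()
```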