Progressive High-Frequency Reconstruction for Pan-Sharpening with Implicit Neural Representation

Authors: Ge Meng, Jingjia Huang, Yingying Wang, Zhenqi Fu, Xinghao Ding, Yue Huang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on different datasets demonstrate that our method outperforms existing approaches in terms of quantitative and visual measurements for high-frequency detail recovery."
Researcher Affiliation | Academia | "¹School of Informatics, Xiamen University, China ²Institute of Artificial Intelligence, Xiamen University, China {mengg, huangjj, wangyingying7, fuzhenqi}@stu.xmu.edu.cn, {dxh, yhuang2010}@xmu.edu.cn"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "We use three satellite image datasets in our experiments, including WorldView-II, WorldView-III and GaoFen-2, each containing several hundred PAN-LRMS image pairs."
Dataset Splits | No | The paper states that it uses the Wald protocol to generate training samples and mentions a "training set" and "testing outputs", but it does not specify explicit percentages, counts, or a methodology for splitting the generated samples into training, validation, and test sets (see the Wald-protocol sketch below the table).
Hardware Specification | Yes | "We implement our network on the PC with a single NVIDIA TITAN RTX 3090 GPU."
Software Dependencies | No | The paper mentions building the network in the "PyTorch framework" but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "The learning rate is set to 1 × 10⁻⁴ and the batch size is set to 4. The frequency of the signal at each input layer is initialized when entering BMHG so that the outputs y₀, y₁, y₂, y₄ are constrained to one eighth, quarter, half, and full bandwidth. Specifically, we set B₀ = B₁ = B/8 and B₂ = B₃ = B₄ = B/4 such that Σᵢ Bᵢ = B. We set the number of epochs for each stage of training to 100, 100, 200, and 200, respectively." (A configuration sketch appears below the table.)
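
For the "Dataset Splits" row above: the Wald protocol generates training pairs by degrading the full-resolution images so that the original multispectral image can serve as the ground-truth reference. Below is a minimal PyTorch sketch of that idea; the 4× scale factor, the bicubic anti-aliased resampling, and the tensor shapes are assumptions, since the paper does not detail its degradation pipeline.

```python
# Hypothetical sketch of Wald-protocol training-pair generation.
# The scale factor, resampling method, and shapes are assumptions;
# the paper only states that it follows the Wald protocol.
import torch
import torch.nn.functional as F

def make_wald_pair(ms: torch.Tensor, pan: torch.Tensor, scale: int = 4):
    """Degrade full-resolution MS and PAN images by `scale` so that the
    original MS image can serve as the ground-truth HRMS reference.

    ms:  (N, C, H, W)    multispectral input
    pan: (N, 1, sH, sW)  panchromatic input, s = scale
    """
    # Anti-aliased bicubic downsampling approximates the low-pass
    # filtering plus decimation prescribed by the Wald protocol.
    lr_ms = F.interpolate(ms, scale_factor=1 / scale, mode="bicubic",
                          align_corners=False, antialias=True)
    lr_pan = F.interpolate(pan, scale_factor=1 / scale, mode="bicubic",
                           align_corners=False, antialias=True)
    return lr_ms, lr_pan, ms  # (input LRMS, input PAN, reference HRMS)
```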
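
For the "Experiment Setup" row: a small sketch that reproduces the quoted bandwidth allocation and checks that the cumulative limits match the stated one-eighth / quarter / half / full-bandwidth constraints on y₀, y₁, y₂, y₄. Only the numbers come from the paper; the variable names and the normalization B = 1.0 are assumptions.

```python
# Sketch of the bandwidth allocation and training schedule quoted in the
# "Experiment Setup" row. Only the numbers come from the paper; the
# variable names and the normalization B = 1.0 are assumptions.
from itertools import accumulate

B = 1.0  # full signal bandwidth, normalized here for illustration

# "We set B0 = B1 = B/8 and B2 = B3 = B4 = B/4 such that sum_i Bi = B."
band_widths = [B / 8, B / 8, B / 4, B / 4, B / 4]
assert abs(sum(band_widths) - B) < 1e-9

# Cumulative limits show why the outputs y0, y1, y2, y4 are constrained
# to one eighth, one quarter, one half, and the full bandwidth.
cum = list(accumulate(band_widths))  # [0.125, 0.25, 0.5, 0.75, 1.0]
assert [cum[0], cum[1], cum[2], cum[4]] == [B / 8, B / 4, B / 2, B]

# Remaining hyperparameters quoted from the paper.
learning_rate = 1e-4
batch_size = 4
stage_epochs = [100, 100, 200, 200]  # epochs per progressive training stage
```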