LAGConv: Local-Context Adaptive Convolution Kernels with Global Harmonic Bias for Pansharpening

Authors: Zi-Rong Jin, Tian-Jing Zhang, Tai-Xiang Jiang, Gemine Vivone, Liang-Jian Deng

AAAI 2022, pp. 1113-1121 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "The superiority of the proposed method is demonstrated by extensive experiments implemented on a wide range of datasets compared with state-of-the-art pansharpening methods. Besides, more discussions testify that the proposed LAGConv outperforms recent adaptive convolution techniques for pansharpening."
Researcher Affiliation: Academia. "(1) University of Electronic Science and Technology of China; (2) School of Economic Information Engineering, Southwestern University of Finance and Economics; (3) National Research Council, Institute of Methodologies for Environmental Analysis."
Pseudocode: No. The paper does not contain any pseudocode or clearly labeled algorithm blocks; it uses diagrams to illustrate the network architecture and operations.
Open Source Code: Yes. "Besides, the code is available at https://github.com/liangjiandeng/LAGConv."
Open Datasets: Yes. "All the source data can be downloaded from the public websites." The cited sources are https://resources.maxar.com/ and http://www.rscloudmart.com/dataProduct/sample.
Dataset Splits: Yes. "For WV3 data, we obtain 12580 PAN/MS/GT image pairs (70%/20%/10% as training/validation/testing datasets) with size 64×64×1, 16×16×8, and 64×64×8, respectively; for GF2 data, we use 10000 PAN/MS/GT image pairs (70%/20%/10% as training/validation/testing datasets) with size 64×64×1, 16×16×4, and 64×64×4, respectively; for QB data, 20000 PAN/MS/GT image pairs (70%/20%/10% as training/validation/testing datasets) with size 64×64×1, 16×16×4, and 64×64×4 are adopted." (A minimal split sketch appears after this table.)
Hardware Specification: Yes. "The models are implemented with PyTorch on NVIDIA GeForce GTX 2080Ti."
Software Dependencies: No. The paper mentions PyTorch and the Adam optimizer but does not provide version numbers for these software components, which are needed for reproducibility.
Experiment Setup: Yes. "For the parameters of the proposed model, the number of the LCA-ResBlocks is set to 5, while the channels of the LAGConv and the kernel size are 32 and k×k (with k = 3), respectively. Besides, we set 1000 epochs for the network training, while the learning rate is 1×10⁻³ in the first 500 epochs and 1×10⁻⁴ in the last 500 epochs. The FC layers used in the LAGConv consist of two dense layers with k² neurons, and the FC layers in the GH bias consist of two dense layers with C_out neurons. Adam optimizer is used for training with a batch size equal to 32, while β1 and β2 are set to 0.9 and 0.999, respectively." (A minimal training-configuration sketch appears after this table.)
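
The 70%/20%/10% split reported in the Dataset Splits row can be reproduced with a simple random partition of the pre-cut patch triplets. Below is a minimal PyTorch sketch, assuming the WV3 patches are stored in an HDF5 file; the file name patches_wv3.h5 and the field names pan, ms, and gt are hypothetical, and the released code may organize the data differently.

```python
# Minimal sketch of a 70%/20%/10% split, assuming patches are stored in an
# HDF5 file with datasets "pan", "ms", and "gt" (hypothetical layout).
import h5py
import torch
from torch.utils.data import Dataset, random_split

class PatchDataset(Dataset):
    def __init__(self, path):
        with h5py.File(path, "r") as f:           # e.g. 12580 WV3 patch triplets
            self.pan = torch.tensor(f["pan"][:])  # N x 1 x 64 x 64
            self.ms  = torch.tensor(f["ms"][:])   # N x 8 x 16 x 16
            self.gt  = torch.tensor(f["gt"][:])   # N x 8 x 64 x 64

    def __len__(self):
        return len(self.gt)

    def __getitem__(self, i):
        return self.pan[i], self.ms[i], self.gt[i]

full = PatchDataset("patches_wv3.h5")             # hypothetical file name
n = len(full)
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_set, val_set, test_set = random_split(
    full, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(0),   # fixed seed so the split is repeatable
)
```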
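The optimizer and learning-rate schedule quoted in the Experiment Setup row translate directly into a short PyTorch training setup. The sketch below only mirrors the stated hyper-parameters (Adam with β1 = 0.9 and β2 = 0.999, batch size 32, learning rate 1×10⁻³ for epochs 1-500 and 1×10⁻⁴ for epochs 501-1000); the LAGNet model class, its call signature, and the L1 reconstruction loss are assumptions, and train_set comes from the split sketch above.

```python
# Minimal sketch of the reported training schedule (hyper-parameters from the
# quote above; the LAGNet class and the L1 loss choice are assumptions).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

model = LAGNet(num_blocks=5, channels=32, kernel_size=3).cuda()  # hypothetical model class
criterion = nn.L1Loss()                                          # assumed reconstruction loss
optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Drop the learning rate from 1e-3 to 1e-4 after epoch 500, as stated in the paper.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[500], gamma=0.1)

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

for epoch in range(1000):
    for pan, ms, gt in train_loader:
        pan, ms, gt = pan.cuda(), ms.cuda(), gt.cuda()
        optimizer.zero_grad()
        loss = criterion(model(pan, ms), gt)      # fused output vs. ground truth
        loss.backward()
        optimizer.step()
    scheduler.step()
```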