SpanConv: A New Convolution via Spanning Kernel Space for Lightweight Pansharpening

Authors: Zhi-Xuan Chen, Cheng Jin, Tian-Jing Zhang, Xiao Wu, Liang-Jian Deng

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that the proposed network significantly reduces parameters compared with benchmark networks for remote sensing pansharpening, while achieving competitive performance and excellent generalization. (Section 4: Experiments)
Researcher Affiliation | Academia | University of Electronic Science and Technology of China, Chengdu, 611731; {zhixuan.chen, cheng.jin}@std.uestc.edu.cn, zhangtianjinguestc@163.com, wxwsx1997@gmail.com, liangjian.deng@uestc.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/zhi-xuan-chen/IJCAI-2022_SpanConv.
Open Datasets | Yes | All DL networks are trained and tested on the WorldView-3 dataset with eight bands and the QuickBird dataset with four bands, which are available on public websites: https://www.maxar.com/product-samples/ and https://earth.esa.int/eogateway/catalog/quickbird-full-archive
Dataset Splits | Yes | After downloading these datasets, we use Wald's protocol to simulate 10,000 PAN/MS/GT image pairs with sizes of 64×64, 16×16×8, and 64×64×8, respectively, and divide them 90%/10% into training (9,000 examples) and validation (1,000 examples) sets; see the split sketch after the table.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, or specific machine configurations) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | We exploit the ℓ1 distance between the network prediction and the ground-truth (GT) image to supervise the reconstruction process. Besides, the Adam optimizer is utilized with a learning rate that decays by a factor of 0.75 every 120 epochs. The initial learning rate and training period are 0.0025 and 800 epochs, respectively; see the training sketch after the table.
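
The reported 90%/10% split can be illustrated with a minimal sketch. This is not the authors' code: the array shapes follow the WorldView-3 sizes quoted in the Dataset Splits row, while the placeholder arrays and the fixed random seed are assumptions added for illustration.

```python
import numpy as np

# Placeholder data with the paper's stated shapes: 10,000 simulated
# PAN/MS/GT pairs of sizes 64x64, 16x16x8, and 64x64x8 (eight bands).
pan = np.zeros((10000, 64, 64), dtype=np.float32)
ms = np.zeros((10000, 16, 16, 8), dtype=np.float32)
gt = np.zeros((10000, 64, 64, 8), dtype=np.float32)

# Shuffle once, then take 90% for training and 10% for validation.
rng = np.random.default_rng(seed=0)  # seed is an assumption, not from the paper
idx = rng.permutation(len(pan))
train_idx, val_idx = idx[:9000], idx[9000:]

arrays = {"pan": pan, "ms": ms, "gt": gt}
train = {k: v[train_idx] for k, v in arrays.items()}
val = {k: v[val_idx] for k, v in arrays.items()}
print(len(train["pan"]), len(val["pan"]))  # 9000 1000
```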
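Similarly, the training recipe quoted in the Experiment Setup row (ℓ1 loss, Adam, initial learning rate 0.0025, decay factor 0.75 every 120 epochs, 800-epoch training period) maps onto a short PyTorch sketch. The model, tensor shapes, and data loader below are hypothetical stand-ins, not the paper's network.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the pansharpening network (the real one is SpanConv-based).
model = nn.Conv2d(9, 8, kernel_size=3, padding=1)
inputs = torch.randn(32, 9, 64, 64)   # e.g., upsampled 8-band MS stacked with PAN
targets = torch.randn(32, 8, 64, 64)  # GT images
loader = DataLoader(TensorDataset(inputs, targets), batch_size=8)

criterion = nn.L1Loss()  # the paper's l1 distance between prediction and GT
optimizer = torch.optim.Adam(model.parameters(), lr=0.0025)  # initial lr from the paper
# Decay the learning rate by a factor of 0.75 every 120 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=120, gamma=0.75)

for epoch in range(800):  # 800-epoch training period
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```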