MMPN: Multi-supervised Mask Protection Network for Pansharpening

Authors: Changjie Chen, Yong Yang, Shuying Huang, Wei Tu, Weiguo Wan, Shengna Wei

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on simulated and real satellite datasets show that our method is superior to state-of-the-art methods both subjectively and objectively."
Researcher Affiliation | Academia | Changjie Chen (1), Yong Yang (2), Shuying Huang (3), Wei Tu (4), Weiguo Wan (5) and Shengna Wei (2). (1) School of Information Management, Jiangxi University of Finance and Economics, Nanchang, China; (2) School of Computer Science and Technology, Tiangong University, Tianjin, China; (3) School of Software, Tiangong University, Tianjin, China; (4) School of Mathematics and Computer Science, Jiangxi Science and Technology Normal University, Nanchang, China; (5) School of Software and Internet of Things Engineering, Jiangxi University of Finance and Economics, Nanchang, China
Pseudocode | No | The paper describes the proposed method using diagrams and mathematical formulas but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at github.com/sharpeningNN/MMPN.
Open Datasets | Yes | Experiments are conducted on simulated and real satellite datasets, including IKONOS (4 bands), Pléiades (4 bands), and WorldView-3 (8 bands).
Dataset Splits | No | The paper states that deep learning methods are 'retrained using the same datasets' and gives image sizes ('The sizes of LRMS and PAN images are 64×64 and 256×256, respectively.'), but it does not specify explicit train/validation/test splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | Yes | 'All deep learning-based methods are retrained using the same datasets for fairness, and tested on the environment of NVIDIA GeForce RTX 3090 and Intel 11700K.'
Software Dependencies | No | The paper mentions using the ENVI tool for classification but does not provide version numbers for it or any other software dependencies.
Experiment Setup | No | The paper describes the network architecture and loss function but does not provide specific experimental setup details such as learning rate, batch size, number of epochs, or optimizer type.