Frequency-Adaptive Pan-Sharpening with Mixture of Experts

Authors: Xuanhua He, Keyu Yan, Rui Li, Chengjun Xie, Jie Zhang, Man Zhou

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Quantitative and qualitative experiments over multiple datasets demonstrate that our method performs the best against other state-of-the-art ones and comprises a strong generalization ability for real-world scenes."
Researcher Affiliation | Academia | "1 Hefei Institutes of Physical Science, Chinese Academy of Sciences; 2 University of Science and Technology of China; 3 Nanyang Technological University"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks labeled as such.
Open Source Code | Yes | "Code will be made publicly at https://github.com/alexhe101/FAME-Net."
Open Datasets | Yes | "Our experiments were conducted on three typical datasets, namely WorldView-II (WV2), Gaofen2 (GF2), and WorldView-III (WV3), which include various natural and urban scenarios. Since ground truth is unavailable, we follow the Wald protocol (Wald, Ranchin, and Mangolini 1997) to generate training samples." A sketch of this degradation protocol appears after the table.
Dataset Splits | No | The paper mentions generating training samples and evaluating on specific datasets, but does not explicitly detail training, validation, and test splits with percentages or counts.
Hardware Specification | Yes | "We trained our model using the Python framework on an RTX 3060 GPU, with four experts (n=4), k=2 for each MOE, and a total of 1000 epochs."
Software Dependencies | No | The paper mentions using the Python framework and the Adam optimizer, but does not name specific software libraries or version numbers.
Experiment Setup | Yes | "We trained our model using the Python framework on an RTX 3060 GPU, with four experts (n=4), k=2 for each MOE, and a total of 1000 epochs. We used the Adam optimizer with a learning rate of 5e-4 and linearly decreased the weight of the loss (α) during training. For training samples, we cropped LRMS patches of size 32x32 and PAN patches of size 128x128 from the images, with a batch size of 4." Minimal sketches of the top-k MoE routing and of this training configuration follow the Wald-protocol sketch below.
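
The Open Datasets row cites the Wald protocol for generating reduced-resolution training pairs when no ground truth exists. Below is a minimal sketch of that degradation procedure. The 4x resolution ratio matches the quoted patch sizes (32x32 LRMS vs. 128x128 PAN); the Gaussian blur sigma and cubic interpolation are illustrative assumptions, since the paper's exact degradation filter is not quoted in the table.

```python
# A minimal sketch of Wald-protocol sample generation. The Gaussian sigma and
# cubic downsampling are assumptions for illustration; the paper only states
# that it follows Wald, Ranchin, and Mangolini (1997).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def wald_degrade(ms, pan, scale=4, sigma=1.0):
    """Create an (LRMS, reduced PAN, reference) training triple.

    ms  : (H, W, C) original multispectral image -> used as the reference
    pan : (sH, sW)  original panchromatic image at `scale`x MS resolution
    """
    # Blur each spectral band before decimation to limit aliasing.
    blurred = np.stack(
        [gaussian_filter(ms[..., c], sigma) for c in range(ms.shape[-1])],
        axis=-1,
    )
    # Downsample MS by the resolution ratio -> low-resolution MS input.
    lrms = zoom(blurred, (1 / scale, 1 / scale, 1), order=3)
    # Downsample PAN by the same ratio so the PAN/MS geometry is preserved.
    pan_lr = zoom(gaussian_filter(pan, sigma), (1 / scale, 1 / scale), order=3)
    return lrms, pan_lr, ms  # the original MS serves as ground truth
```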
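The Hardware Specification row reports four experts (n=4) with top-k=2 routing per MoE. The sketch below shows generic top-k mixture-of-experts routing with those counts; the gating network and the convolutional experts are placeholder choices, not FAME-Net's actual frequency-adaptive modules.

```python
# A minimal sketch of top-k MoE routing with n=4 experts and k=2, matching
# the counts quoted in the table. Gate and experts are generic placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, channels, n_experts=4, k=2):
        super().__init__()
        self.k = k
        self.experts = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_experts)
        )
        # Globally pooled features drive a per-sample gate over the experts.
        self.gate = nn.Linear(channels, n_experts)

    def forward(self, x):
        logits = self.gate(x.mean(dim=(2, 3)))           # (B, n_experts)
        topk_val, topk_idx = logits.topk(self.k, dim=1)  # keep the k best experts
        weights = F.softmax(topk_val, dim=1)             # renormalize over the k
        out = torch.zeros_like(x)
        for b in range(x.shape[0]):
            for j in range(self.k):
                expert = self.experts[topk_idx[b, j]]
                out[b] = out[b] + weights[b, j] * expert(x[b : b + 1])[0]
        return out
```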
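Finally, a training-loop sketch wiring together the hyperparameters quoted in the Experiment Setup row: Adam with lr 5e-4, 1000 epochs, batch size 4, and a loss weight α decreased linearly during training. The composite loss is a placeholder: the paper's actual loss terms are not quoted in the table, so an L1 term plus an α-weighted frequency-amplitude term is assumed purely for illustration, as are the model and dataset interfaces.

```python
# A minimal training-configuration sketch using the quoted hyperparameters.
# The dataset class and the two loss terms combined by alpha are assumptions,
# not the paper's definitions.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=1000, lr=5e-4, batch_size=4, alpha0=1.0):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    l1 = torch.nn.L1Loss()
    for epoch in range(epochs):
        alpha = alpha0 * (1 - epoch / epochs)  # linear decay of the loss weight
        for lrms, pan, gt in loader:           # 32x32 LRMS, 128x128 PAN patches
            pred = model(lrms, pan)
            # Placeholder composite loss: spatial L1 plus an alpha-weighted
            # auxiliary term (here, L1 on FFT amplitudes as an illustration).
            aux = l1(torch.fft.rfft2(pred).abs(), torch.fft.rfft2(gt).abs())
            loss = l1(pred, gt) + alpha * aux
            opt.zero_grad()
            loss.backward()
            opt.step()
```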