Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Unfolded Network with Intrinsic Supervision for Pan-Sharpening

Authors: Hebaixu Wang, Meiqi Gong, Xiaoguang Mei, Hao Zhang, Jiayi Ma

AAAI 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the advantages of our method compared to state-of-the-arts, showcasing its remarkable generalization capability to real-world scenes."
Researcher Affiliation | Academia | "Electronic Information School, Wuhan University, Wuhan 430072, China. EMAIL, EMAIL"
Pseudocode | No | "Following the framework of half-quadratic splitting (HQS) (Sun et al. 2020), two auxiliary variables U and V are introduced to reformulate Eq. (2):

$$\arg\min_{H,U,V} \frac{1}{2}\|L - DBH\|_2^2 + \frac{\eta_1}{2}\|U - H\|_2^2 + \frac{\eta_2}{2}\|V - H\|_2^2 + \frac{\lambda_2}{2}\Omega_2(P, V), \quad (3)$$

where $\eta_1$, $\eta_2$, $\lambda_1$ and $\lambda_2$ are penalty parameters. To achieve the unrolling inference, Eq. (3) can be divided into the following three sub-problems and solved alternately:

$$U^{(k)} = \arg\min_U \eta_1\|U - H^{(k)}\|_2^2 + \eta_2\|T_p(P) - T_h(U)\|_2^2, \quad (4)$$

$$V^{(k)} = \arg\min_V \lambda_1\|V - H^{(k)}\|_2^2 + \lambda_2\|DBV - I_P\|_2^2, \quad (5)$$

$$H^{(k+1)} = \arg\min_H \frac{1}{2}\|L - DBH\|_2^2 + \eta_1\|U^{(k)} - H\|_2^2 + \eta_2\|V^{(k)} - H\|_2^2, \quad (6)$$"
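The quoted sub-problems are all quadratic, so each alternating HQS update admits a closed form. The sketch below is only an illustration of that alternation under simplifying assumptions: the operators D, B, T_p, T_h are stood in for by small random matrices, the data terms are plain least squares, and none of this is the authors' DISPNet, which learns these updates as unfolded network stages.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                 # toy signal dimension
DB = rng.normal(size=(n, n)) * 0.3    # stand-in for the degradation operator D*B
Th = rng.normal(size=(n, n)) * 0.3    # stand-in for T_h
TpP = rng.normal(size=n)              # stand-in for T_p(P)
IP = rng.normal(size=n)               # stand-in for I_P
L = rng.normal(size=n)                # observed low-resolution image (flattened)

eta1, eta2, lam1, lam2 = 0.5, 0.5, 0.5, 0.5   # illustrative penalty parameters
I = np.eye(n)
H = np.zeros(n)                       # initial estimate of the sharpened image

for k in range(4):                    # K = 4 unfolding stages, as in the paper
    # Eq. (4): U-update, a ridge-type least-squares problem in U
    U = np.linalg.solve(eta1 * I + eta2 * Th.T @ Th,
                        eta1 * H + eta2 * Th.T @ TpP)
    # Eq. (5): V-update, same structure with the degradation operator
    V = np.linalg.solve(lam1 * I + lam2 * DB.T @ DB,
                        lam1 * H + lam2 * DB.T @ IP)
    # Eq. (6): H-update; setting the gradient of
    # 1/2||L-DBH||^2 + eta1||U-H||^2 + eta2||V-H||^2 to zero gives:
    H = np.linalg.solve(DB.T @ DB + 2 * (eta1 + eta2) * I,
                        DB.T @ L + 2 * eta1 * U + 2 * eta2 * V)
```

Because each step solves its sub-problem exactly, the gradient of the Eq. (6) objective vanishes at the final H, which is a quick sanity check on the update formulas.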
Open Source Code | Yes | "Our code is publicly available at https://github.com/Baixuzx7/DISPNet."
Open Datasets | Yes | "Extensive experiments are conducted over three satellite datasets, namely GaoFen-2, QuickBird and WorldView-II. ... In the training stage, the reduced image pairs are treated as inputs, while H is regarded as a reference."
Dataset Splits | No | "In the training stage, the reduced image pairs are treated as inputs, while H is regarded as a reference."
Hardware Specification | Yes | "All the experiments are conducted on a desktop with 2.6GHz AMD EPYC 7H12, NVIDIA GeForce RTX 3090."
Software Dependencies | No | "The implementation is based on the PyTorch framework."
Experiment Setup | Yes | "For optimization, the learning rate is set to $1 \times 10^{-4}$. The Adam optimizer is employed to update the network parameters for 600 epochs with a batch size of 16. The number of unfolding stages is K = 4; other coefficients are α = 0.1, β = 1, γ = 0.01, ρ = 0.1 and λ = 10."
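The quoted settings can be collected into a minimal PyTorch training configuration. This is a sketch under stated assumptions only: `model` is a placeholder convolution, not the DISPNet architecture (which is in the authors' repository), and the L1 loss, tensor shapes, and channel count are hypothetical stand-ins.

```python
import torch

# Placeholder network; the actual DISPNet architecture and losses are in
# the authors' repository (https://github.com/Baixuzx7/DISPNet).
model = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)

# Settings quoted above: Adam optimizer, learning rate 1e-4,
# 600 epochs, batch size 16, K = 4 unfolding stages.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
num_epochs, batch_size, num_stages = 600, 16, 4

# One illustrative optimization step on dummy data (stand-in L1 loss);
# shapes and channel count are assumptions, not taken from the paper.
lr_input = torch.randn(batch_size, 4, 64, 64)   # reduced-resolution input
reference = torch.randn(batch_size, 4, 64, 64)  # reference H
loss = torch.nn.functional.l1_loss(model(lr_input), reference)
loss.backward()
optimizer.step()
```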