Proximal PanNet: A Model-Based Deep Network for Pansharpening
Authors: Xiangyong Cao, Yang Chen, Wenfei Cao
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on some benchmark datasets show that our network performs better than other advanced methods both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China. 2 Ministry of Education Key Lab For Intelligent Networks and Network Security, Xi'an 710049, China. 3 School of Mathematics and Statistics, Shaanxi Normal University, Xi'an 710119, China. |
| Pseudocode | No | The paper describes the algorithm steps in text and visually in Figure 2, but does not provide a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available. |
| Open Datasets | No | We conduct several experiments using data from the Worldview3 satellite. Specifically, we extract 12.5K image pairs from the image of Worldview3... The ground truth HRMS image is obtained by using Wald's protocol (Wald, Ranchin, and Mangolini 1997) due to its unavailability. |
| Dataset Splits | Yes | We split these image pairs into 90%/10% for training/testing. |
| Hardware Specification | Yes | We implement our network using TensorFlow on a GTX 1080Ti GPU with 12 GB memory. |
| Software Dependencies | No | The paper mentions 'TensorFlow' but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | In our network, all the convolutional kernel sizes are set as 8×8. The channel number of each convolution is set as 16. The stage number of our network is set as 2. Additionally, we adopt the Adam algorithm, and a decayed technique to set the learning rate, i.e., decayed by 0.9 every 50 epochs with a fixed initial learning rate 0.0001. Also, the epoch number is 100, and the mini-batch size is 64. |
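The learning-rate schedule quoted in the Experiment Setup row (initial rate 0.0001, multiplied by 0.9 every 50 epochs, over 100 epochs) can be sketched as a step-decay function. This is a minimal reconstruction from the reported hyperparameters, not the authors' code; the function name and signature are assumptions for illustration.

```python
def decayed_lr(epoch, initial_lr=1e-4, decay_rate=0.9, decay_every=50):
    """Step-decay schedule as reported in the paper's setup:
    the learning rate is multiplied by decay_rate once every
    decay_every epochs, starting from initial_lr.
    """
    return initial_lr * decay_rate ** (epoch // decay_every)

# With 100 training epochs (as reported), the schedule has two plateaus:
# epochs 0-49 use 1e-4, epochs 50-99 use 0.9e-4.
schedule = [decayed_lr(e) for e in range(100)]
```

The reported mini-batch size of 64 and stage number of 2 are independent of this schedule and would be set elsewhere in the training loop.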