Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation for Reference-Based Super-Resolution
Authors: Bin Xia, Yapeng Tian, Yucheng Hang, Wenming Yang, Qingmin Liao, Jie Zhou
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed AMSA achieves superior performance over state-of-the-art approaches on both quantitative and qualitative evaluations. |
| Researcher Affiliation | Academia | Bin Xia¹, Yapeng Tian², Yucheng Hang¹, Wenming Yang¹*, Qingmin Liao¹, Jie Zhou³ (¹ Shenzhen International Graduate School / Department of Electronic Engineering, Tsinghua University; ² University of Rochester; ³ Department of Automation, Tsinghua University) |
| Pseudocode | Yes | The illustration of Embedded PatchMatch is shown in Figure 3 (a). The layers of Embedded PatchMatch are designed as follows: 1. Initialization layer. [...] 2. LR Propagation & Evaluation layer. [...] 3. Ref Propagation & Evaluation layer. [...] 4. Iteration. Repeat step 2 and step 3 M log(N/8) times. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | We train and test our network on the CUFED5 (Zhang et al. 2019b) dataset. [...] Additionally, we test our network on the Sun80, Urban100, and Manga109 datasets. The Sun80 (Sun and Hays 2012) dataset [...] The Urban100 (Huang, Singh, and Ahuja 2015) dataset [...] Manga109 (Matsui et al. 2017) also lacks reference images... |
| Dataset Splits | No | The paper mentions training and testing sets, but does not specify details about a validation set or how data was split for validation. |
| Hardware Specification | Yes | The model is implemented by PyTorch on an NVIDIA 2080Ti GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for the software dependency. |
| Experiment Setup | Yes | AMSA is trained and tested at a scale factor of 4 between the LR and HR image. We augment training data by random horizontal and vertical flipping followed by random rotation of 90°, 180°, and 270°. The model is optimized by the ADAM optimizer with β1 = 0.9, β2 = 0.99 and an initial learning rate of 1e-4. Each mini-batch includes 9 LR patches of size 40×40 along with 9 Ref patches of size 160×160. The weights for Lrec, Lper, and Ladv are 1.0, 10⁻⁴, and 10⁻⁶, respectively. For Coarse-to-Fine Embedded PatchMatch, the iterations M of Embedded PatchMatch on the 1/8, 1/4, 1/2, and original scales are set to 1, 1, 2, and 6, respectively. In addition, for the MSDA module of AMSA, we set the downsampling factor k and the number of downsampled Ref_i to 0.8 and 5, respectively. |
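The Embedded PatchMatch pseudocode quoted in the table follows the classic propagate-and-evaluate pattern. The sketch below restates that loop in PyTorch as a reading aid only: the feature extraction, the coarse-to-fine pyramid, and the exact Ref-propagation rule belong to the paper, and here the Ref step is approximated by a small local random search, so this is an illustrative sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def embedded_patchmatch(lr_feat, ref_feat, iters=6):
    """PatchMatch-style correspondence search between LR and Ref feature
    maps (batch size 1 assumed for brevity). Returns, for every LR
    position, the (y, x) index of its best-matching Ref position.
    Illustrative sketch only, not the authors' implementation."""
    _, c, h, w = lr_feat.shape
    _, _, hr, wr = ref_feat.shape

    # 1. Initialization layer: a random Ref correspondence per LR pixel.
    nnf = torch.stack([torch.randint(0, hr, (h, w)),
                       torch.randint(0, wr, (h, w))], dim=-1)

    def score(y, x, ry, rx):
        # Feature similarity between LR position (y, x) and Ref (ry, rx).
        return F.cosine_similarity(lr_feat[0, :, y, x],
                                   ref_feat[0, :, ry, rx], dim=0)

    # 4. Iteration: repeat the propagation/evaluation steps.
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best = nnf[y, x].tolist()
                best_s = score(y, x, *best)
                # 2. LR propagation & evaluation: adopt a neighbour's
                # correspondence, shifted by the same spatial offset.
                for dy, dx in ((0, -1), (-1, 0)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = min(max(int(nnf[ny, nx, 0]) - dy, 0), hr - 1)
                        cx = min(max(int(nnf[ny, nx, 1]) - dx, 0), wr - 1)
                        s = score(y, x, cy, cx)
                        if s > best_s:
                            best, best_s = [cy, cx], s
                # 3. Ref propagation & evaluation, approximated here by a
                # local random search around the current best match.
                ry = min(max(best[0] + int(torch.randint(-2, 3, (1,))), 0), hr - 1)
                rx = min(max(best[1] + int(torch.randint(-2, 3, (1,))), 0), wr - 1)
                if score(y, x, ry, rx) > best_s:
                    best = [ry, rx]
                nnf[y, x] = torch.tensor(best)
    return nnf
```

In the paper this loop runs coarse-to-fine on 1/8, 1/4, 1/2, and full-scale feature maps, with the per-scale iteration counts (1, 1, 2, 6) given in the experiment-setup row.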
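The experiment-setup row is concrete enough to restate as a training-step sketch. The snippet below wires the quoted hyper-parameters (mini-batch of 9, 40×40 LR and 160×160 Ref patches, flip/rotate augmentation, ADAM with β1 = 0.9, β2 = 0.99, lr = 1e-4, loss weights 1.0 / 10⁻⁴ / 10⁻⁶) around a stand-in one-layer network; the real AMSA model, perceptual loss, and adversarial loss are not reproduced and are marked as placeholders.

```python
import random
import torch
import torch.nn as nn

# Hyper-parameters quoted in the experiment-setup row.
BATCH_SIZE = 9
LR_PATCH, REF_PATCH = 40, 160            # 40x40 LR and 160x160 Ref patches (x4 scale)
W_REC, W_PER, W_ADV = 1.0, 1e-4, 1e-6    # weights for L_rec, L_per, L_adv

def augment(lr, ref):
    """Random horizontal/vertical flip plus a random 0/90/180/270-degree
    rotation, applied identically to the paired LR and Ref patches."""
    if random.random() < 0.5:
        lr, ref = lr.flip([-1]), ref.flip([-1])   # horizontal flip
    if random.random() < 0.5:
        lr, ref = lr.flip([-2]), ref.flip([-2])   # vertical flip
    k = random.randrange(4)                       # number of 90-degree turns
    return lr.rot90(k, dims=(-2, -1)), ref.rot90(k, dims=(-2, -1))

model = nn.Conv2d(3, 3, 3, padding=1)    # stand-in for the AMSA network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.99))

lr_batch = torch.rand(BATCH_SIZE, 3, LR_PATCH, LR_PATCH)
ref_batch = torch.rand(BATCH_SIZE, 3, REF_PATCH, REF_PATCH)
lr_batch, ref_batch = augment(lr_batch, ref_batch)

sr = model(lr_batch)
# L_rec is computed against the ground-truth HR image in practice; L_per
# and L_adv additionally need a VGG feature extractor and a discriminator.
l_rec = (sr - lr_batch).abs().mean()
loss = W_REC * l_rec                     # + W_PER * l_per + W_ADV * l_adv
loss.backward()
optimizer.step()
```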