KeDuSR: Real-World Dual-Lens Super-Resolution via Kernel-Free Matching
Authors: Huanjing Yue, Zifan Cui, Kun Li, Jingyu Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three datasets demonstrate that our method outperforms the second-best method by a large margin. Our code and dataset are available at https://github.com/ZifanCui/KeDuSR. |
| Researcher Affiliation | Academia | ¹School of Electrical and Information Engineering, Tianjin University, China ²College of Intelligence and Computing, Tianjin University, China {huanjing.yue, cuizifan, lik, yjy}@tju.edu.cn |
| Pseudocode | No | The paper describes its methods in prose and with diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and dataset are available at https://github.com/ZifanCui/KeDuSR. |
| Open Datasets | Yes | Our code and dataset are available at https://github.com/ZifanCui/KeDuSR. Therefore, we construct the first well-aligned DuSR-Real dataset, where the HR is aligned with the real captured LR and the reference is also real captured. The LR and Ref have overlapped FoV regions. In addition, we reorganize the previous dual-lens SR datasets and construct another two real datasets, namely RealMCVSR-Real and CameraFusion-Real, for comprehensive evaluation. |
| Dataset Splits | No | Among the remaining triples (I_LR, I_Ref, I_HR), 420 triples are used for training, and 55 triples are used for testing. The paper only explicitly states training and testing splits, without a distinct validation split. |
| Hardware Specification | Yes | All experiments were conducted using PyTorch (Paszke et al. 2019) on an Nvidia GeForce RTX 3090 GPU. |
| Software Dependencies | No | All experiments were conducted using PyTorch (Paszke et al. 2019) on an Nvidia GeForce RTX 3090 GPU. The paper names PyTorch but gives no version number, and lists no other software dependencies with versions. |
| Experiment Setup | Yes | During training, the batch size is 4, and the patch size for the input LR is 128×128. We utilized the Adam optimizer (Kingma and Ba 2014) and the cosine annealing scheme (Loshchilov and Hutter 2016). The learning rate is initially set to 10⁻⁴ and is decayed to 10⁻⁶. |
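
To make the reported setup concrete, here is a minimal PyTorch sketch of the quoted hyperparameters (batch size 4, 128×128 LR patches, Adam, cosine annealing from 10⁻⁴ to 10⁻⁶). The model, loss, and total iteration count are hypothetical placeholders; the paper does not specify them in the quoted passage, and this is not the authors' released training code.

```python
# Sketch of the reported training configuration, under stated assumptions.
import torch
from torch import nn
from torch.optim.lr_scheduler import CosineAnnealingLR

model = nn.Conv2d(3, 3, 3, padding=1)  # hypothetical stand-in for the KeDuSR network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial LR: 1e-4

total_iters = 100_000  # hypothetical; the schedule length is not stated in the quote
scheduler = CosineAnnealingLR(optimizer, T_max=total_iters, eta_min=1e-6)  # decay to 1e-6

for _ in range(total_iters):
    lr_patch = torch.randn(4, 3, 128, 128)  # batch size 4, 128x128 LR input patches
    loss = model(lr_patch).mean()           # placeholder loss for illustration only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # cosine annealing of the learning rate
```

The scheduler's `eta_min=1e-6` matches the paper's stated final learning rate; everything else (network, loss, data) would come from the authors' repository linked above.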