DFVSR: Directional Frequency Video Super-Resolution via Asymmetric and Enhancement Alignment Network
Authors: Shuting Dong, Feng Lu, Zhe Wu, Chun Yuan
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments, 4.1 Datasets and Implementation, 4.2 Comparison with State-of-The-Art Methods, 4.3 Ablation Study, Table 1: Quantitative comparison (PSNR and SSIM) of different methods on REDS4, Vimeo-T, Vid4 and UDM10 with upscale factor 4 under BI and BD degradations. |
| Researcher Affiliation | Collaboration | Shuting Dong1,2 , Feng Lu1,2 , Zhe Wu2 and Chun Yuan1,2 1Tsinghua Shenzhen International Graduate School, Tsinghua University 2Peng Cheng Laboratory |
| Pseudocode | No | The paper describes its proposed network and modules using textual descriptions and architectural diagrams (Figure 1 and Figure 2), but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | We adopt two widely used datasets to train: REDS [Nah et al., 2019] and Vimeo-90K [Xue et al., 2019]. |
| Dataset Splits | Yes | Following [Chan et al., 2021a], we apply REDS4 as our test set, and REDSval4 as the validation set. |
| Hardware Specification | Yes | The model is trained under the PyTorch framework with an NVIDIA RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions training under the 'PyTorch framework' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We employ Adam optimizer by setting β1 = 0.9 and β2 = 0.999. The learning rate is initialized as 2.5 × 10⁻⁵. We apply RGB patches of size 64 × 64 as inputs. We set the mini-batch size to 32. In addition to our proposed DFLoss, we also adopt Charbonnier loss [Lai et al., 2017], and ε is set to 1 × 10⁻³. The total number of iterations is 600K. |
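The reported training setup (Adam with β1 = 0.9, β2 = 0.999, learning rate 2.5 × 10⁻⁵, mini-batch of 32 RGB 64 × 64 patches, Charbonnier loss with ε = 1 × 10⁻³) can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the DFVSR network is not released, so a tiny conv + PixelShuffle(4) stand-in is used for the ×4 upscaling, and the paper's proposed DFLoss is omitted.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier loss [Lai et al., 2017]: smooth L1 variant,
    # sqrt((x - y)^2 + eps^2), averaged over all elements.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

# Hypothetical stand-in for DFVSR: 48 = 3 * 4^2 channels, so
# PixelShuffle(4) yields a 3-channel output at 4x resolution.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 48, 3, padding=1),
    torch.nn.PixelShuffle(4),
)

# Adam optimizer with the reported hyperparameters.
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-5, betas=(0.9, 0.999))

# One training step: mini-batch of 32 low-resolution 64x64 RGB patches,
# ground-truth high-resolution patches at 4x (256x256).
lr_patches = torch.rand(32, 3, 64, 64)
hr_patches = torch.rand(32, 3, 256, 256)

optimizer.zero_grad()
loss = charbonnier_loss(model(lr_patches), hr_patches)
loss.backward()
optimizer.step()
```

In the full schedule this step would be repeated for the reported 600K iterations, with DFLoss added to the Charbonnier term.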