Noise-Free Optimization in Early Training Steps for Image Super-resolution

Authors: MinKyu Lee, Jae-Pil Heo

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed method can effectively enhance the stability of vanilla training, leading to overall performance gain. Codes are available at github.com/2minkyulee/ECO.
Researcher Affiliation | Academia | MinKyu Lee, Jae-Pil Heo* Sungkyunkwan University {bluelati98, jaepilheo}@g.skku.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; procedures are described in narrative text.
Open Source Code | Yes | Codes are available at github.com/2minkyulee/ECO.
Open Datasets | Yes | We validate the effectiveness of our method on benchmark datasets: Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), BSD100 (Martin et al. 2001), Urban100 (Huang, Singh, and Ahuja 2015) and Manga109 (Matsui et al. 2017).
Dataset Splits | No | The paper mentions 'Validation results' and uses benchmark datasets that typically come with predefined splits, but the main text does not state which splits were actually used (e.g., percentages, sample counts, or an explicit statement that standard splits were followed).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions software components such as ReLU activations and implicitly relies on a deep learning framework, but it does not specify exact version numbers for any key software components or libraries (e.g., Python, PyTorch, or CUDA).
Experiment Setup | Yes | Figure 3 shows 'Ours (lr=1e-4)', 'Ours (lr=2e-4)', 'KD (lr=1e-4)', and 'Vanilla (lr=1e-4)', indicating the specific learning rates used, and Figure 4 shows results across mini-batch sizes of 2, 4, 8, and 16.
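
For reference, the hyperparameters recoverable from the table above can be collected into a small configuration sketch. This is not the authors' released training code (see github.com/2minkyulee/ECO for that); the dictionary keys below are illustrative names chosen here, and only the values (learning rates, mini-batch sizes, and evaluation datasets) come from the paper's figures and dataset list.

```python
# Minimal sketch of the experiment settings reported in the paper's figures.
# Key names are illustrative, not taken from the authors' code.
experiment_setup = {
    # Figure 3 compares methods at these learning rates.
    "learning_rates": {
        "ours": [1e-4, 2e-4],
        "kd": [1e-4],
        "vanilla": [1e-4],
    },
    # Figure 4 reports results across these mini-batch sizes.
    "mini_batch_sizes": [2, 4, 8, 16],
    # Benchmark evaluation sets listed in the paper.
    "eval_datasets": ["Set5", "Set14", "BSD100", "Urban100", "Manga109"],
}

if __name__ == "__main__":
    # Print the settings grouped by method for a quick overview.
    for method, lrs in experiment_setup["learning_rates"].items():
        print(f"{method}: learning rates {lrs}")
    print("mini-batch sizes:", experiment_setup["mini_batch_sizes"])
    print("evaluation datasets:", ", ".join(experiment_setup["eval_datasets"]))
```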