Unfolding the Alternating Optimization for Blind Super Resolution

Authors: Zhengxiong Luo, Yan Huang, Shang Li, Liang Wang, Tieniu Tan

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the-art methods and produce more visually favorable results at much higher speed.
Researcher Affiliation | Academia | Zhengxiong Luo (1,2,3), Yan Huang (1,2), Shang Li (2,3), Liang Wang (1,4,5), and Tieniu Tan (1,4); 1 Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR); 2 Institute of Automation, Chinese Academy of Sciences (CASIA); 3 School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS); 4 Center for Excellence in Brain Science and Intelligence Technology (CEBSIT); 5 Chinese Academy of Sciences, Artificial Intelligence Research (CAS-AIR)
Pseudocode | No | The paper provides architectural diagrams of the network modules but no pseudocode or algorithm blocks. (A hedged sketch of such an alternating loop is given below the table.)
Open Source Code | Yes | The source code is available at https://github.com/greatlog/DAN.git.
Open Datasets | Yes | We collect 3450 HR images from DIV2K [1] and Flickr2K [11] as training set.
Dataset Splits | No | The paper details training and testing data but does not explicitly describe validation splits or their use during training.
Hardware Specification | Yes | All models are trained on RTX2080Ti GPUs.
Software Dependencies | No | The paper mentions the Adam optimizer but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | The input size during training is 64 × 64 for all scale factors. The batch size is 64. Each model is trained for 4 × 10^5 iterations. We use Adam [22] as our optimizer, with β1 = 0.9, β2 = 0.99. The initial learning rate is 2 × 10^-4, and it decays by half every 1 × 10^5 iterations. (A sketch of this training configuration is given below the table.)
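
The Pseudocode row notes that the method is described only through module diagrams. For orientation, here is a minimal PyTorch-style sketch of what the paper's unfolded alternation between a kernel Estimator and an image Restorer could look like; the module names, loop length, kernel-code size, and initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class UnfoldedAlternatingSR(nn.Module):
    """Sketch of an unfolded alternating loop for blind SR.

    A Restorer produces an SR image from the LR input and the current kernel
    estimate; an Estimator re-estimates the kernel from the LR input and the
    current SR result. All names and defaults here are assumptions.
    """

    def __init__(self, restorer: nn.Module, estimator: nn.Module,
                 steps: int = 4, kernel_dim: int = 10):
        super().__init__()
        self.restorer = restorer      # (lr_image, kernel_code) -> sr_image
        self.estimator = estimator    # (lr_image, sr_image)    -> kernel_code
        self.steps = steps            # number of unfolded alternations (assumed)
        self.kernel_dim = kernel_dim  # size of the reduced kernel code (assumed)

    def forward(self, lr: torch.Tensor):
        # Start from a fixed kernel code; the paper may initialize differently.
        kernel = lr.new_zeros(lr.size(0), self.kernel_dim)
        sr = lr
        for _ in range(self.steps):
            sr = self.restorer(lr, kernel)    # restore with the current kernel estimate
            kernel = self.estimator(lr, sr)   # refine the kernel with the current SR estimate
        return sr, kernel
```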
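
The Experiment Setup row lists concrete optimizer settings. A minimal PyTorch sketch of that configuration follows; the batch size, patch size, iteration count, Adam betas, learning rate, and decay schedule are taken from the row above, while the network, data, scale factor, and loss function are placeholders (assumptions) so the loop runs end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# From the row above: 64x64 input patches, batch size 64, 4e5 iterations,
# Adam(beta1=0.9, beta2=0.99), lr 2e-4 halved every 1e5 iterations.
# SCALE = 4 is chosen only as an example factor.
BATCH_SIZE, PATCH_SIZE, SCALE = 64, 64, 4
TOTAL_ITERS, DECAY_EVERY = 4 * 10**5, 10**5

# Stand-in upscaler so the loop is self-contained; the actual model is the
# paper's Estimator/Restorer pair, not this toy network.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * SCALE ** 2, 3, padding=1),
    nn.PixelShuffle(SCALE),
)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=DECAY_EVERY, gamma=0.5)

for it in range(TOTAL_ITERS):
    # Placeholder random batch; in practice LR/HR pairs come from DIV2K/Flickr2K crops.
    lr_patch = torch.rand(BATCH_SIZE, 3, PATCH_SIZE, PATCH_SIZE)
    hr_patch = torch.rand(BATCH_SIZE, 3, PATCH_SIZE * SCALE, PATCH_SIZE * SCALE)

    sr = model(lr_patch)
    loss = F.l1_loss(sr, hr_patch)  # L1 loss is an assumption; the row does not name the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

StepLR with gamma=0.5 and step_size=10^5, stepped once per iteration, reproduces the "decay by half every 10^5 iterations" schedule quoted in the table.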