Efficient Residual Dense Block Search for Image Super-Resolution

Authors: Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate the effectiveness of the proposed search method, and the discovered efficient super-resolution models achieve better performance than state-of-the-art methods with a limited number of parameters and FLOPs.
Researcher Affiliation | Collaboration | 1. Huawei Noah's Ark Lab; 2. Huawei CBG; 3. School of Computer Science, Faculty of Engineering, The University of Sydney
Pseudocode | Yes | Algorithm 1: Guided evolutionary algorithm
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | The most widely used dataset, DIV2K (Timofte et al. 2017), is adopted in this paper.
Dataset Splits | Yes | It consists of 800 training images and 100 validation images. During the evolution and retraining procedures, the SR models are trained with the DIV2K training set and evaluated with the validation set.
Hardware Specification | Yes | The evolution procedure is performed on a single Tesla V100 server with 8 GPUs and takes about one day.
Software Dependencies | No | The paper mentions using the ADAM optimizer but does not specify version numbers for any software dependencies such as programming languages or libraries.
Experiment Setup | Yes | The number of children is 16, composed of 8 children mutated from the elite individuals and 8 children produced by crossover with the parents. The number of generations G is 40 and the mutation probability is 0.2. The coefficient α is 0.9 and the constant ϵ is 0.001. The coefficient η(t) is updated every 10 epochs: η_L is initialized to 0.0625 and multiplied by 2 every period, and all of the other coefficients are η_l = (1 − η_L) / (L − 1). To enhance the difference between block credits, we employ the square of the block credit to guide the mutation, i.e. p_select(b_j) = c_n²(j) / Σ_{j=1}^{N_b} c_n²(j). For the phenotype, the maximum block number is 20 and the minimum number of active blocks is 5. During evolution, we crop 32×32 RGB patches from the LR images as input for training. We train each model for 60 epochs with a mini-batch size of 16. The evolution procedure is performed on a single Tesla V100 server with 8 GPUs and takes about one day. ... Our models are trained with the ADAM optimizer with β1 = 0.9, β2 = 0.999 and an initial learning rate of 10⁻⁴. The learning rate is halved every 300 epochs over the 1000 total training epochs. (Illustrative sketches of the mutation-selection rule and the training schedule follow below.)
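
The guided mutation quoted in the Experiment Setup row can be illustrated with a short sketch. This is a minimal, hypothetical rendering of one generation, assuming placeholder `mutate` and `crossover` operators and a list of per-block credits c_n(j); it only encodes the quoted hyperparameters (16 children split 8/8, mutation probability 0.2, squared-credit selection) and is not the authors' implementation.

```python
import random
import numpy as np

def block_mutation_probs(credits):
    """Squared-credit selection: p_select(b_j) = c_n^2(j) / sum_j c_n^2(j)."""
    sq = np.asarray(credits, dtype=float) ** 2
    return sq / sq.sum()

def one_generation(elite, parents, mutate, crossover, credits):
    """One generation of 16 children: 8 mutated from the elite genotype and
    8 produced by crossover with randomly chosen parents.
    `mutate` and `crossover` are placeholder operators, not the paper's code."""
    probs = block_mutation_probs(credits)          # which block to mutate
    children = [mutate(elite, probs, p_mut=0.2) for _ in range(8)]
    children += [crossover(elite, random.choice(parents)) for _ in range(8)]
    return children
```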
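
The quoted retraining schedule likewise maps onto a standard optimizer configuration. The following sketch assumes a PyTorch training loop; the model is a stand-in (the searched SR architecture is not reproduced here), and only the quoted values (β1 = 0.9, β2 = 0.999, initial learning rate 10⁻⁴, halving every 300 of 1000 epochs, mini-batch size 16, 32×32 LR patches) come from the paper.

```python
import torch

# Stand-in model; the searched residual dense SR network is not reproduced here.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# ADAM with beta1 = 0.9, beta2 = 0.999 and initial learning rate 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Halve the learning rate every 300 epochs over the 1000 training epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=300, gamma=0.5)

for epoch in range(1000):
    # ... one epoch over 32x32 LR RGB patches with mini-batch size 16 ...
    optimizer.step()       # placeholder for the actual per-batch updates
    scheduler.step()
```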