PromptRestorer: A Prompting Image Restoration Method with Degradation Perception

Authors: Cong Wang, Jinshan Pan, Wei Wang, Jiangxin Dong, Mengzhu Wang, Yakun Ju, Junyang Chen

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate PromptRestorer on benchmarks for 4 image restoration tasks: (a) deraining, (b) deblurring, (c) desnowing, and (d) dehazing. We train separate models for different image restoration tasks. Our PromptRestorer employs a 3-level encoder-decoder.
Researcher Affiliation | Academia | ¹The Hong Kong Polytechnic University, ²Nanjing University of Science and Technology, ³Dalian University of Technology, ⁴Hebei University of Technology, ⁵Shenzhen University
Pseudocode | No | The paper describes the model architecture through equations (Eqs. 1–8) and illustrates components in figures, but it does not include a formal pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any concrete statement or link regarding the public availability of its source code.
Open Datasets | Yes | We evaluate PromptRestorer on benchmarks for 4 image restoration tasks: (a) deraining, (b) deblurring, (c) desnowing, and (d) dehazing. We train separate models for different image restoration tasks. ... We evaluate deblurring results on both synthetic datasets (GoPro [65], HIDE [76]) and real-world datasets (RealBlur-R [73], RealBlur-J [73]).
Dataset Splits | No | The paper refers to standard benchmark datasets (e.g., GoPro, HIDE, RealBlur) that typically have predefined splits, but it does not explicitly state the training, validation, or test splits (e.g., percentages or sample counts) within the paper.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using the AdamW optimizer but does not specify version numbers for any programming languages, libraries, or other software dependencies.
Experiment Setup | Yes | Our PromptRestorer employs a 3-level encoder-decoder. From level-1 to level-3, the numbers of CGTs are [2, 3, 6], the numbers of attention heads are [2, 4, 8], and the numbers of channels are [48, 96, 192]. The expanding channel capacity factor β is 4. For downsampling and upsampling, we adopt pixel-unshuffle and pixel-shuffle [77], respectively. We train models with the AdamW optimizer with the initial learning rate 3e-4 gradually reduced to 1e-6 with cosine annealing [63]. The patch size is set as 256×256.
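
Since no official code is available, the following is a minimal PyTorch sketch of the quoted training setup: AdamW with a cosine schedule from 3e-4 down to 1e-6, 256×256 patches, and pixel-(un)shuffle resampling. The model stub, batch size, total iteration count, and the exact channel arithmetic of the resampling blocks are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the reported training configuration (hedged: model stub,
# total_iters, and batch size are assumptions, not from the paper).
import torch
import torch.nn as nn

class Downsample(nn.Module):
    """Halve spatial resolution via pixel-unshuffle, as quoted above."""
    def __init__(self, channels):
        super().__init__()
        # PixelUnshuffle(2) maps (C, H, W) -> (4C, H/2, W/2); a 1x1 conv
        # then reduces 4C to the 2C used at the next encoder level
        # (channel doubling [48, 96, 192] per the paper; the conv placement
        # here is an assumption).
        self.body = nn.Sequential(
            nn.PixelUnshuffle(2),
            nn.Conv2d(4 * channels, 2 * channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return self.body(x)

class Upsample(nn.Module):
    """Double spatial resolution via pixel-shuffle [77]."""
    def __init__(self, channels):
        super().__init__()
        # Expand to 2C, then PixelShuffle(2) yields (C/2, 2H, 2W).
        self.body = nn.Sequential(
            nn.Conv2d(channels, 2 * channels, kernel_size=1, bias=False),
            nn.PixelShuffle(2),
        )

    def forward(self, x):
        return self.body(x)

model = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for PromptRestorer
total_iters = 300_000                   # assumed; not stated in the paper

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
# Cosine annealing from 3e-4 down to 1e-6, as described in the paper.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_iters, eta_min=1e-6
)

for it in range(total_iters):
    # 256x256 training patches; random tensors stand in for real data.
    degraded = torch.randn(8, 3, 256, 256)
    clean = torch.randn(8, 3, 256, 256)
    restored = model(degraded)
    loss = nn.functional.l1_loss(restored, clean)  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```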