ReFIR: Grounding Large Restoration Models with Retrieval Augmentation

Authors: Hang Guo, Tao Dai, Zhihao Ouyang, Taolin Zhang, Yaohua Zha, Bin Chen, Shu-Tao Xia

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that ReFIR can achieve not only high-fidelity but also realistic restoration results.
Researcher Affiliation | Collaboration | Hang Guo (1), Tao Dai (2), Zhihao Ouyang (3), Taolin Zhang (1), Yaohua Zha (1), Bin Chen (4), Shu-Tao Xia (1,5); (1) Tsinghua University, (2) Shenzhen University, (3) Aitist.ai, (4) Harbin Institute of Technology, (5) Peng Cheng Laboratory
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | https://github.com/csguoh/ReFIR
Open Datasets | Yes | The datasets for this setting employ the widely used RefSR datasets, including CUFED5 [56, 57] and WR-SR [47]... And we use DIV2K [58] as the high-quality image database for retrieval...
Dataset Splits | No | The paper uses standard datasets such as CUFED5 and WR-SR and mentions RealPhoto60 for evaluation, but it does not explicitly state training/validation/test splits, percentages, or how any splits were created or used in its experiments.
Hardware Specification | Yes | We use an input image with the resolution of 2048 × 2048 to evaluate the GPU memory and the inference time on a single 80GB NVIDIA A100 GPU. (See the profiling sketch below the table.)
Software Dependencies | No | The paper does not specify software dependencies or their version numbers.
Experiment Setup | Yes | For a fair comparison, we use one reference image if not specified... the I_LQ is up-sampled to the desired size using bicubic interpolation... We use reflective padding... We use fixed random seeds... The hyperparameters of different baselines follow their original settings... In practice, we adopt a moderate s = 0.5 to trade off the hallucination and the overuse of the reference image. (A preprocessing sketch follows the table.)
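The experiment-setup quote mentions bicubic up-sampling of I_LQ, reflective padding, and fixed random seeds, but the paper gives no code for these steps. The following is a minimal PyTorch sketch under stated assumptions: the scale factor, padding multiple, and seed value are illustrative defaults, not values taken from the paper, and the blending factor s = 0.5 is only noted in a comment because the paper's reference-feature injection is not reproduced here.

```python
import random
import numpy as np
import torch
import torch.nn.functional as F

def set_seed(seed: int = 42) -> None:
    """Fix random seeds so sampling-based restoration runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

def prepare_lq_input(lq: torch.Tensor, scale: int = 4, multiple: int = 64):
    """Bicubic-upsample I_LQ to the target resolution, then reflect-pad so the
    spatial size divides the network stride.

    `scale` and `multiple` are illustrative defaults, not values from the paper.
    Returns the padded tensor and the pre-padding size so outputs can be cropped back.
    """
    lq_up = F.interpolate(lq, scale_factor=scale, mode="bicubic", align_corners=False)
    _, _, h, w = lq_up.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad on a 4D tensor takes padding as (left, right, top, bottom)
    lq_up = F.pad(lq_up, (0, pad_w, 0, pad_h), mode="reflect")
    return lq_up, (h, w)

# The paper also adopts s = 0.5 to balance hallucination against over-reliance on
# the reference image; how s enters the reference-feature injection is model-specific
# and is not reproduced in this sketch.
```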
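The hardware row reports GPU memory and inference time for a single 2048 × 2048 input on an 80GB A100, but the paper does not show how the measurement is made. Below is a minimal PyTorch profiling sketch of how such a benchmark is typically run; `restoration_model` and the warm-up/run counts are placeholders, not details from the paper.

```python
import time
import torch

@torch.no_grad()
def profile_restoration(model, lq, warmup: int = 3, runs: int = 10):
    """Return average inference time (seconds) and peak GPU memory (GB) for one input."""
    model.eval()
    for _ in range(warmup):            # warm-up passes stabilize CUDA timing
        model(lq)
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()
    start = time.time()
    for _ in range(runs):
        model(lq)
    torch.cuda.synchronize()           # wait for all kernels before stopping the clock
    avg_time = (time.time() - start) / runs
    peak_mem_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    return avg_time, peak_mem_gb

# Hypothetical usage, mirroring the paper's 2048 x 2048 benchmark setting:
# lq = torch.rand(1, 3, 2048, 2048, device="cuda")
# seconds, gigabytes = profile_restoration(restoration_model, lq)
```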