LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes

Authors: Zefan Qu, Ke Xu, Gerhard Hancke, Rynson Lau

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that LuSh-NeRF outperforms existing approaches. We construct a new dataset containing both synthetic and real images. PSNR, SSIM, and LPIPS [63] metrics are used to evaluate the performance difference... Ablation Study. Fig. 6 demonstrates the effect of the various components of LuSh-NeRF on a realistic scenario. We conduct a quantitative comparison of our method against various combinations of SOTA approaches on our synthesized data in Tab. 1.
Researcher Affiliation | Academia | Zefan Qu, Ke Xu, Gerhard Petrus Hancke, Rynson W.H. Lau, Department of Computer Science, City University of Hong Kong. zefanqu2-c@my.cityu.edu.hk, kkangwing@gmail.com, {gp.hancke, Rynson.Lau}@cityu.edu.hk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks explicitly labeled as 'Algorithm' or 'Pseudocode'.
Open Source Code | Yes | Our code and dataset can be found here: https://github.com/quzefan/LuSh-NeRF.
Open Datasets | Yes | To facilitate training and evaluations, we construct a new dataset containing both synthetic and real images. Our code and dataset can be found here: https://github.com/quzefan/LuSh-NeRF. Since we are the first to reconstruct NeRF with hand-held low-light photographs, we build a new dataset based on the low-light image deblur dataset [67] for training and evaluation. Specifically, our dataset consists of 5 synthetic and 5 real scenes, for the quantitative and generalization evaluations. We use the COLMAP [42] method to estimate the camera pose of each image in the scenarios.
Dataset Splits | Yes | Table 2: The dataset split details for our proposed LOL-Blur NeRF dataset. ... Scenario ... Collected Views ... Training Views ... Evaluation Views. This table clearly shows the number of training and evaluation views for each scene.
Hardware Specification | Yes | All the experiments in this paper are performed on a PC with an i9-13900K CPU and a single NVIDIA RTX3090 GPU.
Software Dependencies | No | The paper mentions implementing based on the 'official code of Deblur-NeRF [26]' and using the 'Rigid Blurring Kernel network in [19]', and uses 'COLMAP [42]' and 'GIM [43]', but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | The number of camera motions k and the frequency filter radius in the CTP module are set to 4 and 30. The number of aligned rays K and certainty threshold θ in the SND module are set to 20 and 0.8. Before training, the input images are up-scaled by gamma adjustment and histogram equalization. The batch size is set to 1,024 rays, with 64 fine and coarse sampled points. α and β are set to 1 and 0 during the first 60K iterations for better rendering results, to avoid the inaccurate matching matrix M interfering with the SND module. The two hyper-parameters are then changed to 1 and 1×10⁻² in the last 40K iterations.
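The setup row above states that input images are brightened by gamma adjustment and histogram equalization before training. The paper's exact preprocessing parameters are not quoted, so the following is a minimal numpy sketch of that kind of pre-enhancement; the gamma value of 0.4 and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gamma_adjust(img, gamma=0.4):
    """Brighten a low-light image in [0, 1] via gamma correction.
    gamma < 1 lifts dark pixels (assumed value, not from the paper)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def histogram_equalize(img, bins=256):
    """Histogram equalization of a single-channel image in [0, 1]:
    map each pixel through the normalized cumulative histogram."""
    hist, bin_edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize the CDF to [0, 1]
    return np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

def pre_enhance(img):
    """Sketch of the described pipeline: gamma adjustment,
    then histogram equalization."""
    return histogram_equalize(gamma_adjust(img))
```

In practice the same steps are available as `skimage.exposure.adjust_gamma` and `skimage.exposure.equalize_hist`; the sketch above only shows the order of operations the paper describes.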
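The report notes that PSNR, SSIM, and LPIPS are the evaluation metrics. As a reference for the simplest of the three, here is a standard PSNR computation in numpy; this is the textbook formula, not code from the paper's repository.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images
    whose values lie in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, two images differing by a constant 0.5 everywhere give an MSE of 0.25 and a PSNR of 10·log10(1/0.25) ≈ 6.02 dB.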