DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior

Authors: Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, Renhe Ji

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitative and qualitative results on the proposed benchmark show that DVN significantly outperforms other comparison algorithms in PSNR and SSIM, especially in extremely low-light conditions.
Researcher Affiliation | Collaboration | 1. Megvii Technology; 2. Southeast University; 3. Dalian University of Technology
Pseudocode | No | The paper describes the proposed DVN architecture and modules, but it includes no explicit pseudocode blocks or sections labeled "Algorithm".
Open Source Code | No | The paper neither states that source code for the proposed Dark Vision Net (DVN) is publicly available nor provides a link to a code repository.
Open Datasets | Yes | "We also propose a new dataset called Dark Vision Dataset (DVD), consisting of aligned RGB-NIR image pairs, as the first public RGB-NIR fusion benchmark."
Dataset Splits | No | The paper states: "we use 5k reference image pairs (256*256) as the training set. Another 1k reference image pairs (256*256) along with 10 additional real noisy image pairs (1920*1080) are used for testing." It does not mention a separate validation split.
Hardware Specification | Yes | "All experiments are conducted on a device equipped with two 2080-Ti GPUs."
Software Dependencies | No | The paper mentions the techniques and algorithms used (e.g., the Adam optimizer, Dice loss, Charbonnier loss) but does not specify software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, or specific library versions).
Experiment Setup | Yes | "Batchsize is set to 16. Training images are randomly cropped in the size of 128*128, and the value range is [0, 1]. We augment the training data following MPRNet (Zamir et al. 2021), including random flipping and rotating. Adam optimizer with momentum terms (0.9, 0.999) is applied for optimization. The whole network is trained for 80 epochs, and the learning rate is gradually reduced from 2e-4 to 1e-6. λ in function F is set to 0.5 for all configurations."
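The reported setup is concrete enough to sketch in code. Below is a minimal, hedged reconstruction of the stated hyper-parameters plus the Charbonnier loss the paper cites. Two caveats: the paper says the learning rate is "gradually reduced from 2e-4 to 1e-6" without naming the schedule, so the cosine decay here is an assumption; likewise the Charbonnier `eps` value is a common default, not one the paper specifies.

```python
import numpy as np

# Hyper-parameters as quoted in the Experiment Setup row above.
BATCH_SIZE = 16
CROP_SIZE = 128          # random 128*128 crops, values in [0, 1]
EPOCHS = 80
LR_START, LR_END = 2e-4, 1e-6
ADAM_BETAS = (0.9, 0.999)  # "momentum terms (0.9, 0.999)"
LAMBDA_F = 0.5             # lambda in function F, all configurations

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss sqrt((x - y)^2 + eps^2), averaged over pixels.
    eps=1e-3 is a common default, not specified in the paper."""
    diff = pred - target
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))

def lr_at_epoch(epoch, total=EPOCHS, start=LR_START, end=LR_END):
    """Cosine decay from start to end over `total` epochs.
    ASSUMPTION: the paper only says the rate is 'gradually reduced',
    so the decay shape is a guess; linear decay would also fit."""
    t = epoch / (total - 1)
    return end + 0.5 * (start - end) * (1.0 + np.cos(np.pi * t))
```

In a PyTorch training loop these values would map onto `torch.optim.Adam(params, lr=2e-4, betas=(0.9, 0.999))` with a scheduler such as `CosineAnnealingLR(optimizer, T_max=80, eta_min=1e-6)`; the loss and schedule functions above just make the quoted numbers checkable in isolation.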