Low-Light Video Enhancement with Synthetic Event Guidance

Authors: Lin Liu, Junfeng An, Jianzhuang Liu, Shanxin Yuan, Xiangyu Chen, Wengang Zhou, Houqiang Li, Yanfeng Wang, Qi Tian

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets. In this section, we conduct an ablation study and compare with state-of-the-art methods.
Researcher Affiliation | Collaboration | 1 CAS Key Laboratory of Technology in GIPAS, EEIS Department, University of Science and Technology of China; 2 Independent Researcher; 3 Queen Mary University of London; 4 Huawei Noah's Ark Lab; 5 Huawei Cloud BU; 6 University of Macau; 7 Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; 8 Shenzhen Institute of Advanced Technology (SIAT)
Pseudocode | No | The paper describes the architecture and processes in detail but does not include formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/LLVE-SEG.
Open Datasets | Yes | For a real low-light video dataset, we adopt SDSD (Wang et al. 2021), which contains 37,500 low- and normal-light image pairs with dynamic scenes. We also perform experiments on Vimeo90k (Xue et al. 2019).
Dataset Splits | Yes | For a fair comparison, we use the same training/test split as in (Wang et al. 2021). We finally get 9,477 training and 1,063 testing sequences, each with 7 frames, from Vimeo90k.
Hardware Specification | Yes | We implement our method in the MindSpore (MindSpore 2022) framework, and train and test it on two 3090Ti GPUs.
Software Dependencies | Yes | We implement our method in the MindSpore (MindSpore 2022) framework, and train and test it on two 3090Ti GPUs.
Experiment Setup | Yes | In the training stage, the patch size and batch size are 256 and 4, respectively. We adopt the Adam (Kingma and Ba 2015) optimizer with momentum set to 0.9. The input number of frames N is set to 5.
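The reported experiment setup can be collected into a small configuration sketch. This is a hypothetical convenience for readers attempting a reproduction, not code from the paper's release: the dict layout, the `batch_shape` helper, and the (B, N, C, H, W) tensor convention are all assumptions; only the numeric values are quoted from the paper.

```python
# Hyperparameters quoted from the paper's experiment setup; everything
# else here (names, layout, batch convention) is an illustrative guess.
train_config = {
    "patch_size": 256,       # square training crop, in pixels
    "batch_size": 4,
    "num_input_frames": 5,   # N, the number of input frames
    "optimizer": "Adam",     # Kingma and Ba 2015
    "momentum": 0.9,         # Adam's first-moment decay (beta1)
}

def batch_shape(cfg, channels=3):
    """Shape of one training batch under a (B, N, C, H, W) convention."""
    return (
        cfg["batch_size"],
        cfg["num_input_frames"],
        channels,
        cfg["patch_size"],
        cfg["patch_size"],
    )
```

Under these assumptions, one training batch would be a tensor of shape `(4, 5, 3, 256, 256)`.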