Seeing Dark Videos via Self-Learned Bottleneck Neural Representation

Authors: Haofeng Huang, Wenhan Yang, Lingyu Duan, Jiaying Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the robustness and superior effectiveness of our proposed method. Our project is publicly available at: https://huangerbai.github.io/SLBNR/. We provide quantitative results in Table 1 and Table 2. We conduct ablation studies as shown in Table 3.
Researcher Affiliation | Academia | ¹Peking University, Beijing, China; ²Peng Cheng Laboratory, Beijing, China. hhf@pku.edu.cn, yangwh@pcl.ac.cn, lingyu@pku.edu.cn, liujiaying@pku.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our project is publicly available at: https://huangerbai.github.io/SLBNR/.
Open Datasets | Yes | The evaluation dataset is the commonly used DRV dataset (Chen et al. 2019), which provides dynamic videos of real dark scenes.
Dataset Splits | No | The paper describes how an evaluation set was chosen but does not specify a separate validation split with percentages, counts, or a splitting methodology detailed enough to reproduce model training.
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models or memory sizes).
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, or other library versions).
Experiment Setup | Yes | The training starts with a 300-epoch self-regression, then continues with the fully-equipped loss for another 300 epochs. We choose λ1=100, λ2=10^4, λ3=10^3, λ4=1, λ5=1.
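
To make the quoted two-stage schedule concrete, below is a minimal PyTorch sketch. Only the epoch counts (300 + 300) and the weights λ1=100, λ2=10^4, λ3=10^3, λ4=1, λ5=1 come from the paper's text; the network, the decomposition into five loss terms, the learning rate, and the dummy data are hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the paper's bottleneck representation network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption

# Loss weights as quoted from the paper: λ1=100, λ2=10^4, λ3=10^3, λ4=1, λ5=1.
LAMBDAS = (100.0, 1e4, 1e3, 1.0, 1.0)

def loss_terms(pred, target):
    # Placeholder decomposition into five terms; the paper's actual loss
    # components are not reproduced here.
    base = F.l1_loss(pred, target)
    return (base, base, base, base, base)

frames = torch.rand(4, 3, 64, 64)  # dummy stand-in for DRV video frames

for epoch in range(600):
    pred = model(frames)
    terms = loss_terms(pred, frames)
    if epoch < 300:
        # Stage 1: 300 epochs of self-regression (first term only, by assumption).
        loss = LAMBDAS[0] * terms[0]
    else:
        # Stage 2: another 300 epochs with the fully-equipped weighted sum.
        loss = sum(w * t for w, t in zip(LAMBDAS, terms))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under this reading, the stage switch only changes which weighted terms enter the objective; whether anything is frozen or re-initialized between the two stages is not specified in the quoted text.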