Color Event Enhanced Single-Exposure HDR Imaging
Authors: Mengyao Cui, Zhigang Wang, Dong Wang, Bin Zhao, Xuelong Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed method. Extensive experiments on these datasets demonstrate the effectiveness of our method. |
| Researcher Affiliation | Collaboration | 1The University of Hong Kong 2Shanghai AI Laboratory 3Northwestern Polytechnical University |
| Pseudocode | No | The paper includes architectural diagrams and descriptions of methods, but no explicitly labeled pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper states that datasets are available at https://www.kaggle.com/datasets/mengyaocui/ce-hdr and mentions that 'In the supplementary material, we provide more details of implementation, datasets, and more comparison results'. However, there is no explicit statement or link confirming the release of the source code for the methodology described in the paper. |
| Open Datasets | Yes | The datasets can be found at https://www.kaggle.com/datasets/mengyaocui/ce-hdr. We make use of HDR-SYNTH and HDR-REAL datasets from the work of (Liu et al. 2020) to build our synthetic datasets. |
| Dataset Splits | Yes | HDR-SYNTH collects 562 HDR images, which are split into 503 HDR training images and 60 HDR evaluating images. In total, we collect 1000 HDR images, 9000 LDR images, and 1000 sets of color events; a tenth of them are split into evaluating images while the rest are split into training images. |
| Hardware Specification | No | The paper describes experimental setups and training details but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used for implementation or experimentation (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | The size of input images is (256, 256), while the size of input voxel bins is (128, 128). For over-/under-exposed image region selection, we empirically generate the raw mask with the threshold θ = 0.12. In the exposure-aware transformer, fk, sk are split into 8 × 8 patches... In the multi-head self-attention, the number of heads is 8 and the dropout rate is 0.1. The loss weight hyper-parameters are empirically set as: λ1 = 1, λ2 = 0.1, λ3 = 0.1, λ4 = 0.001. |
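
The experiment-setup values quoted above can be collected into a minimal sketch. Only the numeric hyper-parameters come from the paper; the function and variable names (`CONFIG`, `raw_exposure_masks`) and the thresholding rule for flagging over-/under-exposed pixels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hyper-parameters quoted in the paper's experiment setup.
# All names in this sketch are illustrative assumptions.
CONFIG = {
    "image_size": (256, 256),
    "voxel_size": (128, 128),
    "mask_threshold": 0.12,   # theta for the raw over-/under-exposure mask
    "patch_size": 8,          # 8 x 8 patches in the exposure-aware transformer
    "num_heads": 8,           # multi-head self-attention heads
    "dropout": 0.1,
    "loss_weights": {"lambda1": 1.0, "lambda2": 0.1,
                     "lambda3": 0.1, "lambda4": 0.001},
}

def raw_exposure_masks(image, theta=CONFIG["mask_threshold"]):
    """Hypothetical over-/under-exposed region selection.

    `image` is an H x W (or H x W x C) array with intensities in [0, 1].
    Pixels within `theta` of the dynamic-range limits are flagged.
    """
    img = np.asarray(image, dtype=np.float32)
    if img.ndim == 3:
        img = img.max(axis=-1)     # reduce RGB to a per-pixel maximum
    over = img >= 1.0 - theta      # near-saturated pixels
    under = img <= theta           # near-black pixels
    return over, under
```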