Deep Low-Contrast Image Enhancement using Structure Tensor Representation

Authors: Hyungjoo Jung, Hyunsung Jang, Namkoo Ha, Kwanghoon Sohn (pp. 1725-1733)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide in-depth analysis on our method and comparison with conventional loss functions. Quantitative and qualitative evaluations demonstrate that the proposed method outperforms the existing state-of-the-art approaches in various benchmarks.
Researcher Affiliation | Collaboration | 1) Yonsei University, 2) Korea Institute of Science and Technology (KIST), 3) LIG Nex1
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The TensorFlow library with a 12GB NVIDIA Titan GPU is used for network construction and training (our code will be made publicly available).
Open Datasets | Yes | Our enhancement method requires multi-exposure image sequences to train the DCNN. Recently, a large-scale multi-exposure image dataset (Cai, Gu, and Zhang 2018) has been constructed, including both indoor and outdoor scenes. We trained our network using 7 multi-exposure sequences for each image from (Cai, Gu, and Zhang 2018), which covers most of the exposure levels.
Dataset Splits | No | The paper states: "We randomly cropped 5 × 10⁴ patches with 128 × 128 size from our training dataset, and trained our network using the patches." It does not specify distinct training, validation, or test splits with percentages or counts for reproduction (a patch-cropping sketch follows this table).
Hardware Specification | Yes | The TensorFlow library with a 12GB NVIDIA Titan GPU is used for network construction and training.
Software Dependencies | No | The paper mentions "the TensorFlow library" but does not specify a version number.
Experiment Setup | Yes | The loss function of Eq. (4) is minimized with the Adam solver (Kingma and Ba 2014) with β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸. We randomly cropped 5 × 10⁴ patches of size 128 × 128 from our training dataset and trained our network on these patches. The learning rate was initialized to 10⁻³ and halved every 10 epochs until 100 epochs.
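
To make the quoted optimizer and schedule settings concrete, here is a minimal sketch assuming the TensorFlow 2.x Keras API (the paper only names TensorFlow, and its network and loss are not reproduced here); only the numeric hyperparameters come from the paper, everything else is an illustrative assumption.

```python
import tensorflow as tf

# Hyperparameters quoted in the Experiment Setup row; the Keras API usage
# and variable names below are assumptions for illustration.
INITIAL_LR = 1e-3    # learning rate initialized to 10^-3
TOTAL_EPOCHS = 100   # trained until 100 epochs

def halved_every_10_epochs(epoch, lr=None):
    """Schedule from the paper: start at 10^-3 and halve every 10 epochs."""
    return INITIAL_LR * (0.5 ** (epoch // 10))

optimizer = tf.keras.optimizers.Adam(
    learning_rate=INITIAL_LR,
    beta_1=0.9,    # β1 = 0.9
    beta_2=0.999,  # β2 = 0.999
    epsilon=1e-8,  # ε = 10^-8
)
lr_callback = tf.keras.callbacks.LearningRateScheduler(halved_every_10_epochs)

# Hypothetical usage: `model` and `structure_tensor_loss` are placeholders,
# since the paper's network and loss of Eq. (4) are not reproduced here.
# model.compile(optimizer=optimizer, loss=structure_tensor_loss)
# model.fit(train_patches, targets, epochs=TOTAL_EPOCHS, callbacks=[lr_callback])
```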
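
Likewise, the random 128 × 128 patch cropping mentioned in the Dataset Splits and Experiment Setup rows could be sketched as follows; the helper name and the placeholder images are assumptions, and only the patch count and size come from the paper.

```python
import numpy as np

def random_patches(images, num_patches=50_000, patch_size=128, seed=0):
    """Randomly crop square patches from a list of (H, W, C) arrays,
    mirroring the "5 × 10^4 patches with 128 × 128 size" setting."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(num_patches):
        img = images[rng.integers(len(images))]   # pick a source image
        h, w = img.shape[:2]
        y = rng.integers(0, h - patch_size + 1)   # random top-left corner
        x = rng.integers(0, w - patch_size + 1)
        patches.append(img[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Placeholder arrays standing in for the multi-exposure sequences of
# (Cai, Gu, and Zhang 2018); the real dataset is not bundled here.
dummy_images = [np.random.rand(256, 256, 3).astype(np.float32) for _ in range(4)]
demo = random_patches(dummy_images, num_patches=8)  # small count for the demo
print(demo.shape)  # (8, 128, 128, 3)
```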