Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
Authors: Chongyi Li, Chun-Le Guo, Man Zhou, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We conduct a systematical analysis of existing LLIE methods on UHD data. 4 EXPERIMENTS Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. A batch size of 6 is applied. Images are randomly cropped into patches of size 512×512 for training. We include 14 state-of-the-art methods (21 models in total) for our benchmarking study and performance comparison. 4.3 ABLATION STUDY We present ablation studies to demonstrate the effectiveness of the main components in our design. |
| Researcher Affiliation | Academia | Chongyi Li1, Chun-Le Guo2, Man Zhou1, Zhexin Liang1, Shangchen Zhou1, Ruicheng Feng1, and Chen Change Loy1 1S-Lab, Nanyang Technological University 2Nankai University |
| Pseudocode | No | The paper includes network architecture diagrams (Figure 3, Figure 4) to illustrate the proposed UHDFour network and its components. However, it does not contain any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code and dataset are available at https://lichongyi.github.io/UHDFour/. |
| Open Datasets | Yes | We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. The code and dataset are available at https://lichongyi.github.io/UHDFour/. |
| Dataset Splits | Yes | We split the UHD-LL dataset into two parts: 2,000 pairs for training and 115 pairs for testing. |
| Hardware Specification | Yes | We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. However, specific version numbers for PyTorch or other software dependencies are not provided. |
| Experiment Setup | Yes | Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. We use an ADAM optimizer for network optimization. The learning rate is set to 0.0001. A batch size of 6 is applied. We fix the channels of each Conv layer to 16, except for the Conv layers associated with outputs. We use the Conv layer with stride = 2 and 4×4 kernels to implement the 2× downsample operation in the encoder and interpolation to implement the 2× upsample operation in the decoder in the LRNet. Unless otherwise stated, the Conv layer uses stride = 1 and 3×3 kernels. We use the training data in the UHD-LL dataset to train our model. Images are randomly cropped into patches of size 512×512 for training. |
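The Experiment Setup row pins down several concrete architectural and optimization details: Conv layers fixed at 16 channels, a stride-2 Conv with 4×4 kernels for the 2× downsample, interpolation for the 2× upsample, ADAM at a learning rate of 0.0001, and a batch of 6 crops of size 512×512. The following is a minimal PyTorch sketch of just those quoted settings; the `ToyEncoderDecoder` module is a hypothetical stand-in for illustration, not the authors' released UHDFour network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoderDecoder(nn.Module):
    """Illustrative module wiring up only the ops quoted in the setup row."""

    def __init__(self, channels: int = 16):  # channels of each Conv layer fixed to 16
        super().__init__()
        # Default Conv: stride = 1, 3x3 kernels (per the quoted setup).
        self.head = nn.Conv2d(3, channels, kernel_size=3, stride=1, padding=1)
        # 2x downsample in the encoder: stride = 2, 4x4 kernels.
        self.down = nn.Conv2d(channels, channels, kernel_size=4, stride=2, padding=1)
        # Output Conv maps back to 3 channels (output layers are exempt from the 16-channel rule).
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.down(F.relu(self.head(x)))
        # 2x upsample in the decoder via interpolation, as described for the LRNet.
        feat = F.interpolate(feat, scale_factor=2, mode="bilinear", align_corners=False)
        return self.tail(feat)

model = ToyEncoderDecoder()
# Optimizer settings quoted in the table: ADAM with learning rate 0.0001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# A batch of 6 random 512x512 crops, matching the reported training configuration.
batch = torch.rand(6, 3, 512, 512)
out = model(batch)
```

Shape bookkeeping: the 4×4/stride-2 Conv with padding 1 halves 512 to 256, and the interpolation restores 512, so `out` has the same spatial size as the input batch.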