Uncertainty-Driven Dehazing Network
Authors: Ming Hong, Jianzhuang Liu, Cuihua Li, Yanyun Qu
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on synthetic datasets and real-world images show that UDN achieves significant quantitative and qualitative improvements, outperforming state-of-the-arts. We implement UDN in the PyTorch 1.2.0 framework with an NVIDIA RTX 2080 GPU. |
| Researcher Affiliation | Collaboration | 1 Xiamen University, 2 Huawei Noah's Ark Lab. mingh@stu.xmu.edu.cn, liu.jianzhuang@huawei.com, chli@xmu.edu.cn, yyqu@xmu.edu.cn |
| Pseudocode | No | The paper describes the proposed methods and their components using prose and illustrative diagrams (Figure 2, 3, 4, 6), but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | Source codes (implemented in MindSpore and PyTorch) will be released later. |
| Open Datasets | Yes | We evaluate the proposed method on the RESIDE dataset [Li et al. 2018] which contains an Indoor Training Set (ITS), an Outdoor Training Set (OTS) and a Synthetic Objective Testing Set (SOTS). Specifically, ITS contains 13,990 synthetic hazy images generated from NYU Depth V2 [Nathan Silberman and Fergus 2012], OTS contains 313,950 synthetic hazy images generated from 8,970 outdoor scenes, and SOTS consists of an indoor test set and an outdoor test set, which includes 1,000 indoor/outdoor hazy images generated from 100 different indoor/outdoor scenes. Besides, we evaluate our model trained with ITS on the Middlebury dataset [Cosmin Ancuti 2016] which contains 23 hazy images generated from high-quality real scenes. We also give a quantitative evaluation on a real-world dataset O-HAZE [Ancuti et al. 2018], which contains 45 pairs of outdoor scenes recorded in haze-free and hazy conditions. |
| Dataset Splits | No | The paper states that ITS is used for training and the SOTS indoor test set for evaluation, and gives the number of images in these datasets. It also mentions retraining on the O-HAZE training set. However, it does not specify explicit training/validation/test splits (e.g., percentages or counts from a single dataset); it relies only on the predefined splits of public datasets or on distinct datasets for training and testing. |
| Hardware Specification | Yes | We implement UDN in the PyTorch 1.2.0 framework with an NVIDIA RTX 2080 GPU. |
| Software Dependencies | Yes | We implement UDN in the PyTorch 1.2.0 framework with an NVIDIA RTX 2080 GPU. |
| Experiment Setup | Yes | We use a batch size of 2 and a patch size of 256x256 pixels for training. Samples are augmented by random rotation and horizontal flipping. The Adam optimizer is used with an initial learning rate of 0.0001, scheduled by cosine decay [Athiwaratkun et al. 2019]. The model is trained for 300 epochs. The parameters M and N are both set to 6, which means we use 6 UDMs in UDN and each contains 6 UFMs. All convolutional layers have C = 64 channels. Besides, we empirically set λp = 1, λu = 0.1, S = 10, T = 5, and q = 10. |
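The reported schedule (initial learning rate 0.0001, cosine decay, 300 epochs) can be sketched as a small standalone function. This is a minimal sketch assuming the standard cosine-annealing form, annealing to zero over the full run; the paper cites Athiwaratkun et al. 2019 for its exact decay variant, which may differ (e.g., warm restarts or a nonzero floor).

```python
import math

# Assumed values taken from the Experiment Setup row above.
INIT_LR = 1e-4
EPOCHS = 300

def cosine_decay_lr(epoch: int, init_lr: float = INIT_LR, total: int = EPOCHS) -> float:
    """Cosine-anneal the learning rate from init_lr down to 0 over `total` epochs."""
    return init_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total))

# Learning rate at the start, midpoint, and end of training.
print(cosine_decay_lr(0), cosine_decay_lr(150), cosine_decay_lr(300))
# epoch 0 -> 1e-4, epoch 150 -> 5e-5, epoch 300 -> ~0
```

In PyTorch this behavior corresponds to pairing `torch.optim.Adam` with `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)`.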