Self-supervised Learning and Adaptation for Single Image Dehazing
Authors: Yudong Liang, Bin Wang, Wangmeng Zuo, Jiaying Liu, Wenqi Ren
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed method performs favorably against the state-of-the-art methods, and is quite efficient, i.e., handling a 4K image in 23 ms. |
| Researcher Affiliation | Academia | (1) School of Computer and Information Technology, Institute of Big Data Science and Industry, Shanxi University, Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi, China; (2) School of Computer Science at Harbin Institute of Technology, Harbin, China; (3) Wangxuan Institute of Computer Technology, Peking University, Beijing, China; (4) School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen, China |
| Pseudocode | No | The paper describes its methods through text and diagrams (Figure 2) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes are available at https://github.com/DongLiangSXU/SLAdehazing. |
| Open Datasets | Yes | Real hazy images from URHI (Unannotated Real Hazy Images) of RESIDE [Li et al., 2018] are used for the self-supervised adaptation stage... Our method is evaluated on two synthetic datasets, i.e., SOTS and 4KID [Zheng et al., 2021]... Following [Li et al., 2021a], the SOTS of the RESIDE v0 dataset in our experiments contains the SOTS-indoor and SOTS-outdoor datasets, i.e., 500 indoor hazy images and 500 outdoor hazy images. |
| Dataset Splits | Yes | Following [Li et al., 2021a], the SOTS of the RESIDE v0 dataset in our experiments contains the SOTS-indoor and SOTS-outdoor datasets, i.e., 500 indoor hazy images and 500 outdoor hazy images. 4KID is a dataset containing large-size 4K (i.e., 3840×2160) synthetic hazy images established by [Zheng et al., 2021]. According to [Zheng et al., 2021], we randomly selected 200 hazy images from the dataset for testing (an illustrative selection sketch follows the table). |
| Hardware Specification | Yes | experiments are conducted on a PC with one NVIDIA TITAN XP GPU. |
| Software Dependencies | Yes | The proposed method is implemented with PyTorch 1.4.0. |
| Experiment Setup | Yes | The models are trained by the Adam optimizer with exponential decay rates β1 and β2 of 0.9 and 0.999, respectively. The initial learning rate and batch size are set to 0.0002 and 8, respectively. In the first training stage, the cosine annealing strategy is applied to adjust the learning rate and the total number of iterations is 50k. As for self-supervised adaptation, only 5k iterations are required (an illustrative configuration sketch follows the table). |
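
The 4KID test selection quoted in the Dataset Splits row amounts to drawing 200 hazy images at random from the dataset. The snippet below is a minimal, hypothetical sketch of that selection; the directory path, file extension, and seed are assumptions, not the authors' released split.

```python
import random
from pathlib import Path

def sample_4kid_test_split(root="4KID/hazy", n_test=200, seed=0):
    """Pick a random held-out test set of 200 images, as described for 4KID (hypothetical paths/seed)."""
    files = sorted(Path(root).glob("*.png"))   # all synthetic 4K hazy frames
    random.Random(seed).shuffle(files)         # reproducible shuffle
    return files[:n_test]                      # images reserved for testing
```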
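
The Experiment Setup row translates directly into an optimizer and scheduler configuration. The sketch below assembles the reported hyperparameters (Adam with β1 = 0.9, β2 = 0.999, initial learning rate 0.0002, cosine annealing, 50k iterations for the first stage and 5k for self-supervised adaptation) in PyTorch; the model argument is a placeholder, not the authors' released code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

def build_training(model: torch.nn.Module, stage: str = "pretrain"):
    """Return optimizer, scheduler, and iteration budget for the reported setup."""
    total_iters = 50_000 if stage == "pretrain" else 5_000   # 50k pre-training, 5k adaptation
    optimizer = Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
    # Cosine annealing over the full iteration budget; eta_min=0 (the default) is an assumption.
    scheduler = CosineAnnealingLR(optimizer, T_max=total_iters)
    return optimizer, scheduler, total_iters
```

With a DataLoader of batch size 8, each iteration would run a forward/backward pass followed by an optimizer step and a scheduler step; the batch size is the only reported setting not visible in the function above.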