Learning to dehaze with polarization

Authors: Chu Zhou, Minggui Teng, Yufei Han, Chao Xu, Boxin Shi

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our approach achieves state-of-the-art performance on both synthetic data and real-world hazy images. To verify the validity of each model design choice, we conduct a series of ablation studies and show comparisons in Table 2.
Researcher Affiliation | Academia | 1 Key Lab of Machine Perception (MOE), Dept. of Machine Intelligence, Peking University; 2 Nat'l Eng. Lab for Video Technology, School of Computer Science, Peking University; 3 Institute for Artificial Intelligence, Peking University; 4 Beijing Academy of Artificial Intelligence; 5 School of Info. and Comm. Eng., Beijing University of Posts and Telecommunications
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit statement of code release) for its source code.
Open Datasets | Yes | The Foggy Cityscapes-DBF dataset [72] meets the above two requirements, so we use the provided z, R, and S to generate our synthetic dataset.
Dataset Splits | No | The paper states, 'The images are resized and randomly cropped to 240×240 patches during the training process, and cropped to 496×240 patches for test.' While it describes training and testing sets, it does not explicitly mention a distinct validation dataset split or its size/percentage. (A cropping sketch follows the table.)
Hardware Specification | Yes | We implement our network using PyTorch on an NVIDIA 2080Ti GPU and apply a two-phase training strategy.
Software Dependencies | No | The paper mentions using 'PyTorch' but does not specify its version number or provide version details for other software dependencies required to replicate the experiment.
Experiment Setup | Yes | ADAM optimizer [26] is used with an initial learning rate 5×10⁻⁴ for the first 300 epochs, and a linear decay to 2.5×10⁻⁴ in the next 100 epochs. Then, we finetune the entire network in an end-to-end manner for another 300 epochs, keeping the learning rate at 5×10⁻⁴. Instance normalization [90] is added during training. (A training-schedule sketch follows the table.)
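
For the patch sizes quoted in the Dataset Splits row, a minimal sketch of the cropping step, assuming standard torchvision transforms. The resize target, the crop type used at test time, and the (height, width) ordering of 496×240 are not stated in the paper and are placeholders here.

```python
import torchvision.transforms as T

# Training pipeline: resize, then take random 240x240 patches as described.
# The resize target (288 below) is an illustrative placeholder; the paper
# only says the images are "resized".
train_transform = T.Compose([
    T.Resize(288),
    T.RandomCrop((240, 240)),
    T.ToTensor(),
])

# Test pipeline: 496x240 patches. A center crop and the (height=240, width=496)
# ordering are assumptions; torchvision expects size as (height, width).
test_transform = T.Compose([
    T.CenterCrop((240, 496)),
    T.ToTensor(),
])
```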
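
For the Experiment Setup row, a minimal sketch of the two-phase schedule as described, assuming PyTorch's Adam optimizer and a LambdaLR scheduler for the linear decay (the paper does not name a scheduler). The tiny placeholder network, data, and L1 loss are hypothetical stand-ins, not the paper's architecture or objective.

```python
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

# Hypothetical stand-ins so the schedule below runs end to end.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
hazy = torch.rand(4, 3, 240, 240)
clean = torch.rand(4, 3, 240, 240)

def train_one_epoch(optimizer):
    loss = nn.functional.l1_loss(model(hazy), clean)  # placeholder objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 1: learning rate 5e-4 for the first 300 epochs, then a linear decay
# to 2.5e-4 over the next 100 epochs.
optimizer = Adam(model.parameters(), lr=5e-4)
scheduler = LambdaLR(
    optimizer,
    lr_lambda=lambda e: 1.0 if e < 300 else 1.0 - 0.5 * (e - 300) / 100,
)
for epoch in range(400):
    train_one_epoch(optimizer)
    scheduler.step()

# Phase 2: finetune the entire network end to end for another 300 epochs,
# keeping the learning rate at 5e-4.
optimizer = Adam(model.parameters(), lr=5e-4)
for epoch in range(300):
    train_one_epoch(optimizer)
```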