Quality-Improved and Property-Preserved Polarimetric Imaging via Complementarily Fusing

Authors: Chu Zhou, Yixing Liu, Chao Xu, Boxin Shi

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our framework achieves state-of-the-art performance.
Researcher Affiliation | Academia | Chu Zhou (1), Yixing Liu (2,3), Chao Xu (4), Boxin Shi (2,3)*; (1) National Institute of Informatics, Japan; (2) State Key Laboratory for Multimedia Information Processing, School of CS, Peking University, China; (3) National Engineering Research Center of Visual Technology, School of CS, Peking University, China; (4) National Key Laboratory of General Artificial Intelligence, School of IST, Peking University, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes]
Open Datasets | No | We propose to generate a synthetic dataset because there is no public dataset for our settings. First, we choose the PLIE dataset [32] as our data source. It provides short-exposure polarized snapshots that suffer from low-light noise, along with the corresponding high-quality reference snapshots captured by a Lucid Vision Phoenix polarization camera, which can serve as L and I. Then, we adopt the approach proposed in [33] to generate the blurry polarized snapshots that suffer from motion blur, which can serve as B. The generated synthetic dataset is not explicitly stated to be publicly available, nor is a link or citation provided for it. (A hedged data-assembly sketch appears after the table.)
Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention a separate validation dataset split.
Hardware Specification | Yes | Our framework is implemented using PyTorch with 2 NVIDIA 2080Ti GPUs.
Software Dependencies | No | Our framework is implemented using PyTorch... The version number for PyTorch is not specified.
Experiment Setup | Yes | Loss function. The total loss function can be written as L = L_s + L_p + L_r... where λ_s^{a,b} are set to be 10.0 and 0.05 respectively... where λ_p^{a,b,c} are set to be 1.0, 0.15, and 1.0 respectively... where λ_r^{a,b} are set to be 10.0 and 100.0 respectively... Training strategy. ...we train the irradiance restoration phase and the polarization reconstruction phase independently for 300 epochs with learning rates of 0.01 and 0.0001 respectively. Then, we train the entire network for 100 epochs with a learning rate of 0.0001, and in this training stage we multiply the loss terms L_s, L_p, and L_r by 5.0, 10.0, and 10.0 respectively. For optimization, we use the Adam optimizer [12] with β1 = 0.5, β2 = 0.999.
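
The Open Datasets row describes how the synthetic dataset is assembled from the PLIE dataset: short-exposure snapshots serve as L, the high-quality references as I, and blurry snapshots B are synthesized following [33]. Below is a minimal sketch of such a pipeline under assumed conditions; the directory layout, the PLIEPairs class, and the blur_synthesis() placeholder are hypothetical and do not reproduce the authors' released code or the blur model of [33].

```python
# Minimal, hypothetical sketch of assembling (L, B, I) triplets from PLIE-style data.
# Directory layout, class name, and blur_synthesis() are assumptions, not the paper's code.
from pathlib import Path

import numpy as np
from torch.utils.data import Dataset


def blur_synthesis(reference: np.ndarray) -> np.ndarray:
    """Stand-in for the blur-generation approach of [33]: average the reference
    with a horizontally shifted copy to mimic motion blur."""
    shifted = np.roll(reference, shift=5, axis=1)
    return 0.5 * (reference + shifted)


class PLIEPairs(Dataset):
    """Yields (L, B, I): low-light snapshot, synthesized blurry snapshot, clean reference."""

    def __init__(self, root: str):
        self.low = sorted(Path(root, "short_exposure").glob("*.npy"))  # noisy snapshots -> L
        self.ref = sorted(Path(root, "reference").glob("*.npy"))       # clean references -> I

    def __len__(self):
        return len(self.low)

    def __getitem__(self, idx):
        L = np.load(self.low[idx]).astype(np.float32)
        I = np.load(self.ref[idx]).astype(np.float32)
        B = blur_synthesis(I)  # blurry counterpart synthesized from the reference
        return L, B, I
```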
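
The Experiment Setup row reports a three-term loss and a two-stage training schedule. The sketch below mirrors only the quoted numeric hyperparameters (epochs, learning rates, stage-2 multipliers, Adam betas) in PyTorch; the WeightedTotalLoss module, the L1 placeholders for L_s, L_p, and L_r, and the stand-in sub-networks are assumptions, since the paper's exact loss formulations and architecture are not reproduced here.

```python
# Hedged sketch of the reported training configuration; loss terms and sub-networks
# are placeholders, only the numeric hyperparameters come from the quoted setup.
import torch
import torch.nn as nn


class WeightedTotalLoss(nn.Module):
    """Total loss L = w_s*L_s + w_p*L_p + w_r*L_r (stage-2 multipliers 5.0/10.0/10.0).
    The inner lambda_s^{a,b}, lambda_p^{a,b,c}, lambda_r^{a,b} weightings are omitted."""

    def __init__(self, w_s: float = 1.0, w_p: float = 1.0, w_r: float = 1.0):
        super().__init__()
        self.w_s, self.w_p, self.w_r = w_s, w_p, w_r
        self.crit = nn.L1Loss()  # placeholder criterion for all three terms

    def forward(self, pred, target):
        l_s = self.crit(pred, target)  # placeholder for L_s
        l_p = self.crit(pred, target)  # placeholder for L_p
        l_r = self.crit(pred, target)  # placeholder for L_r
        return self.w_s * l_s + self.w_p * l_p + self.w_r * l_r


# Stand-in sub-networks for the two phases (the real architecture is not reproduced).
irradiance_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
polarization_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Stage 1: train each phase independently for 300 epochs (lr 0.01 and 0.0001).
opt_irr = torch.optim.Adam(irradiance_net.parameters(), lr=0.01, betas=(0.5, 0.999))
opt_pol = torch.optim.Adam(polarization_net.parameters(), lr=0.0001, betas=(0.5, 0.999))

# Stage 2: fine-tune the entire network for 100 epochs at lr 0.0001,
# with the loss terms multiplied by 5.0, 10.0, and 10.0.
opt_joint = torch.optim.Adam(
    list(irradiance_net.parameters()) + list(polarization_net.parameters()),
    lr=0.0001,
    betas=(0.5, 0.999),
)
stage2_loss = WeightedTotalLoss(w_s=5.0, w_p=10.0, w_r=10.0)
```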