Reflection Separation using a Pair of Unpolarized and Polarized Images
Authors: Youwei Lyu, Zhaopeng Cui, Si Li, Marc Pollefeys, Boxin Shi
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on both synthetic and real data with extensive experiments, including comparison with related work and an ablation study. For all quantitative evaluations, both the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are used to evaluate the quality of separated images. (See the PSNR/SSIM sketch after this table.) |
| Researcher Affiliation | Academia | 1) Beijing University of Posts and Telecommunications; 2) Department of Computer Science, ETH Zürich; 3) National Engineering Laboratory for Video Technology, Peking University; 4) Peng Cheng Laboratory |
| Pseudocode | No | The paper describes the network architecture and physical model using text and equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and test data are available at https://github.com/YouweiLyu/reflection_separation_with_un-polarized_images. |
| Open Datasets | Yes | At the first step, we randomly pick two images from the Places2 dataset [26] as original reflection and transmission layers. (See the layer-sampling sketch after this table.) |
| Dataset Splits | No | We use 5000 pairs of images from our synthetic validation dataset with ground truth reflection and transmission layers to quantitatively compare our method with state-of-the-art approaches. |
| Hardware Specification | No | The paper mentions using a “Lucid Vision Phoenix polarization camera” for data capture but does not provide specific hardware details (GPU/CPU models, memory) used for running the experiments or training the models. |
| Software Dependencies | No | We implement our model using the PyTorch deep learning framework [17]. Adam [11] is used as the optimizer with a starting learning rate of 0.0004, β1 = 0.9 and β2 = 0.999. |
| Experiment Setup | Yes | Adam [11] is used as the optimizer with a starting learning rate of 0.0004, β1 = 0.9 and β2 = 0.999. The learning rate is decreased to 0.0002 and 0.00008 after the 12th and 18th epochs, respectively. λ1, λ2, λ3, and λ4 are set to 1.2, 1.5, 1.0, and 1.5, respectively, for training. (See the training-setup sketch after this table.) |
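
The Research Type row names PSNR and SSIM as the evaluation metrics. The paper does not specify which implementation it used, so the following is a minimal sketch of how these two metrics are conventionally computed; the use of scikit-image for SSIM (and `channel_axis=-1`, which assumes HxWxC RGB arrays and scikit-image ≥ 0.19) is an assumption, not the authors' code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio for images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim(pred, target, max_val=1.0):
    """Structural similarity via scikit-image; assumes HxWxC inputs."""
    return structural_similarity(pred, target, data_range=max_val, channel_axis=-1)
```

In the paper's setting, both separated layers (reflection and transmission) would be scored against their ground truth with these two functions.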
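
The Open Datasets row quotes the first step of the paper's synthetic-data pipeline: drawing two Places2 images to serve as the reflection and transmission layers of one sample. Below is a minimal sketch of just that sampling step; the directory layout, `sample_layer_pair` name, and `size` parameter are hypothetical, and the paper's subsequent polarization-based compositing of the two layers is not reproduced here.

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

def sample_layer_pair(places2_root, size=(256, 256)):
    """Draw two distinct Places2 images to act as the reflection and
    transmission layers of one synthetic sample (sampling step only)."""
    paths = sorted(Path(places2_root).rglob("*.jpg"))
    refl_path, trans_path = random.sample(paths, 2)

    def load(p):
        img = Image.open(p).convert("RGB").resize(size)
        return np.asarray(img, dtype=np.float32) / 255.0  # scale to [0, 1]

    return load(refl_path), load(trans_path)
```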
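
The Experiment Setup row fixes the optimizer, learning-rate schedule, and loss weights. The following PyTorch sketch wires up that configuration; the one-layer stand-in model and the placeholder loss terms are assumptions, while the Adam hyperparameters, the 0.0004 → 0.0002 → 0.00008 schedule at epochs 12/18, and the weights λ1–λ4 = 1.2/1.5/1.0/1.5 come from the quoted text.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 6, kernel_size=3, padding=1)  # stand-in; not the paper's network

# Adam with lr = 0.0004, beta1 = 0.9, beta2 = 0.999, as quoted above.
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))

# LambdaLR multiplies the base lr by the returned factor:
# 1.0 -> 0.0004 (epochs 0-11), 0.5 -> 0.0002 (12-17), 0.2 -> 0.00008 (18+).
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda epoch: 1.0 if epoch < 12 else (0.5 if epoch < 18 else 0.2),
)

lambdas = (1.2, 1.5, 1.0, 1.5)  # loss weights from the quote

def total_loss(loss_terms):
    """Weighted sum of the four loss terms; the terms themselves are
    placeholders, since the quote only gives their weights."""
    return sum(w * term for w, term in zip(lambdas, loss_terms))

for epoch in range(20):
    # ... one training pass over the data would go here ...
    scheduler.step()  # advance the epoch-based schedule
```

`LambdaLR` is used rather than `MultiStepLR` because the quoted drops (×0.5, then ×0.4) are not a constant decay factor.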