Deep Light-field-driven Saliency Detection from a Single View

Authors: Yongri Piao, Zhengkun Rong, Miao Zhang, Xiao Li, Huchuan Lu

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our dataset is randomly split into 1100 for training and 480 for testing. We discover that our saliency detection network is able to outperform all the previous methods on the proposed dataset. We conducted performance evaluations on the proposed dataset. We compare our method with 7 state-of-the-art 2D saliency detection methods. Table 1 shows the quantitative results in terms of S-measure, F-measure, E-measure and MAE. Our proposed method achieves the best results compared with other methods. To verify the importance of our feature extraction technique, we evaluate the proposed network without the rich saliency feature extraction technique, named RSF. We show the quantitative comparison of our method with MAVM in Table 2.
Researcher Affiliation | Academia | 1 School of Information and Communication Engineering, Dalian University of Technology, Dalian, China; 2 Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian, China; 3 DUT-RU International School of Information and Software Engineering, Dalian University of Technology, Dalian, China; {yrpiao@, rzk911113@mail, miaozhang@, lixiaohao@mail, lhchuan@}dlut.edu.cn
Pseudocode | No | The paper describes its methods through textual explanations and network diagrams (Figures 3, 4, 5) but does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | The source code and light field saliency detection dataset can be found at https://github.com/OIPLab-DUT/.
Open Datasets | Yes | We collected the largest available light field dataset for saliency detection, containing 1580 light fields, divided into 1100 for training and 480 for testing, captured by the Lytro Illum camera. The source code and light field saliency detection dataset can be found at https://github.com/OIPLab-DUT/.
Dataset Splits | No | The paper states: "Our dataset is randomly split into 1100 for training and 480 for testing." It does not explicitly mention a separate validation set or split.
Hardware Specification | Yes | We implement our method based on the Tensorflow toolbox with one NVIDIA 1080Ti GPU.
Software Dependencies | No | The paper states: "We implement our method based on the Tensorflow toolbox". However, it does not provide specific version numbers for Tensorflow or any other software dependencies.
Experiment Setup | Yes | We train the whole network end-to-end using the Adam optimization algorithm with default parameters β1 = 0.9, β2 = 0.999, and ε = 1e-08, respectively. For light field synthesis, the learning rate is 0.0001; the weights of the consistency regularization loss λc and the total variation regularization loss λtv are 0.01 and 0.001, respectively. For saliency detection, the learning rate is 0.001. The minibatch size is set to 1 and the maximum iteration is set to 200000.
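The reported optimizer settings can be sketched as a single Adam update in plain Python. This is a minimal illustration of the hyperparameters quoted above (β1 = 0.9, β2 = 0.999, ε = 1e-08, with learning rate 1e-4 for light field synthesis and 1e-3 for saliency detection), not the authors' TensorFlow implementation; the function name and the scalar-parameter setup are hypothetical.

```python
def adam_step(theta, grad, m, v, t,
              lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter, using the paper's
    reported defaults (lr = 1e-4 for synthesis, 1e-3 for saliency)."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (v_hat ** 0.5 + eps)   # parameter update
    return theta, m, v

# First step on a scalar: the update magnitude is approximately lr,
# moving against the gradient sign.
theta, m, v = adam_step(theta=1.0, grad=2.0, m=0.0, v=0.0, t=1)
```

In practice the same update would be applied per tensor by `tf.train.AdamOptimizer` (or `tf.keras.optimizers.Adam`), with the two learning rates used for the two training stages the paper describes.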