Memory-oriented Decoder for Light Field Salient Object Detection

Authors: Miao Zhang, Jingjing Li, Wei Ji, Yongri Piao, Huchuan Lu

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The success of our method is demonstrated by achieving the state of the art on three datasets. We present this problem in a way that is accessible to members of the community and provide a large-scale light field dataset that facilitates comparisons across algorithms. The code and dataset are made publicly available at https://github.com/OIPLab-DUT/MoLF. ... Extensive experiments on three light field datasets show that our method achieves consistently superior performance over 25 state-of-the-art 2D, 3D and 4D approaches. ... 5 Experiments ... To evaluate the performance of our proposed network, we conduct experiments on our proposed dataset and the only two public light field saliency datasets: LFSD [29] and HFUT [59].
Researcher Affiliation | Academia | Miao Zhang, Jingjing Li, Wei Ji, Yongri Piao, Huchuan Lu; Dalian University of Technology, China; miaozhang@dlut.edu.cn, {lijingjing, jiwei521}@mail.dlut.edu.cn, {yrpiao, lhchuan}@dlut.edu.cn
Pseudocode | No | The paper describes methods using text and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and dataset are made publicly available at https://github.com/OIPLab-DUT/MoLF.
Open Datasets | Yes | To remedy the data deficiency problem, we introduce a large-scale light field saliency dataset with 1462 selected high-quality samples captured by Lytro Illum camera. ... The code and dataset are made publicly available at https://github.com/OIPLab-DUT/MoLF.
Dataset Splits | No | Ours: This dataset consists of 1462 light field samples. We randomly select 1000 samples for training and the remaining 462 samples for testing.
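The 1000/462 train/test partition quoted above can be reproduced with a simple random split. A minimal sketch, assuming samples are identified by integer ids and that the random seed (not reported in the paper) is chosen arbitrarily:

```python
import random

def split_dataset(sample_ids, n_train=1000, seed=0):
    """Randomly partition sample ids into train/test sets.

    Mirrors the paper's split of 1462 samples into 1000 train / 462 test;
    the seed is a hypothetical choice, as none is reported.
    """
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)
    return ids[:n_train], ids[n_train:]

train_ids, test_ids = split_dataset(range(1462))
```

Note that without a published seed or split file, a re-run of such a split will generally not recover the authors' exact partition; releasing the split lists alongside the dataset is what makes this variable fully reproducible.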
Hardware Specification | Yes | Our network is implemented on Pytorch framework and trained with a GTX 2080 Ti GPU.
Software Dependencies | No | The paper mentions 'Pytorch framework' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | All training and test images are uniformly resized to 256 × 256. Our network is trained in an end-to-end manner, in which the momentum, weight decay and learning rate are set to 0.9, 0.0005, 1e-10, respectively. During the training phase, we use softmax entropy loss, and the network is trained by standard SGD and converges after 40 epochs with batch size of 1.
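The reported optimizer settings (SGD with momentum 0.9, weight decay 0.0005, learning rate 1e-10) fully determine the per-step update rule. A minimal, dependency-free sketch of one such update for a single scalar parameter, using the paper's hyperparameters; function and variable names are illustrative, not from the authors' code:

```python
def sgd_step(param, grad, velocity,
             lr=1e-10, momentum=0.9, weight_decay=0.0005):
    """One SGD update with momentum and L2 weight decay.

    Hyperparameters match those reported in the paper; the update form
    (decay folded into the gradient, then a momentum buffer) follows the
    common PyTorch-style convention, which is an assumption here.
    """
    g = grad + weight_decay * param       # L2 weight decay added to gradient
    velocity = momentum * velocity + g    # momentum buffer update
    param = param - lr * velocity         # parameter step
    return param, velocity
```

In PyTorch itself this corresponds to constructing `torch.optim.SGD(model.parameters(), lr=1e-10, momentum=0.9, weight_decay=0.0005)`; with so small a learning rate, each individual step moves parameters only marginally, consistent with the reported 40-epoch convergence at batch size 1.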