Deep Embedding Features for Salient Object Detection

Authors: Yunzhi Zhuge, Yu Zeng, Huchuan Lu (pp. 9340-9347)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on five benchmark datasets demonstrate that our method outperforms state-of-the-art results. Our proposed method is end-to-end and achieves a realtime speed of 38 FPS."
Researcher Affiliation | Academia | "Yunzhi Zhuge, Yu Zeng, Huchuan Lu, Dalian University of Technology. {zgyz, zengyu}@mail.dlut.edu.cn, lhchuan@dlut.edu.cn"
Pseudocode | No | The paper includes architectural diagrams but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | "We train our model using DUTS training dataset (Wang et al. 2017b). DUTS (Wang et al. 2017a) is a large dataset which is composed of 10553 training images and 5019 test images with accurate pixel-wise annotations."
Dataset Splits | No | The paper mentions training and test sets for DUTS but does not specify a separate validation dataset or its split information. While some training aspects (like disarraying ground truth) are discussed, they do not relate to a distinct validation split.
Hardware Specification | Yes | "We run our approach on a PC with a 3.7GHz CPU, 32GB RAM and a GTX 1080 Ti GPU (with 11G memory)."
Software Dependencies | No | "We implement our approach in Python with the Pytorch toolbox." The paper mentions Python and PyTorch but does not provide specific version numbers for either.
Experiment Setup | Yes | "Input images are resized to 256×256 to match the size requirements of base network. We use SGD to optimize our network with the momentum parameter of 0.9 and the weight decay of 0.001. We set the base learning rate to 1e-7 and iteration number to 30K. It takes around 7 hours to train our model with a mini-batch of 10."
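The optimizer hyperparameters reported above are enough to reproduce the update rule itself. Below is a minimal pure-Python sketch of one SGD step with momentum and weight decay using the paper's reported values (lr = 1e-7, momentum = 0.9, weight decay = 0.001); it mirrors the standard PyTorch SGD convention. The function name and scalar-parameter framing are illustrative, not from the paper, and the paper's network itself is not released.

```python
# Reported hyperparameters (paper's experiment setup).
LR = 1e-7
MOMENTUM = 0.9
WEIGHT_DECAY = 0.001

def sgd_step(param, grad, velocity):
    """One SGD update for a single scalar parameter,
    following the momentum + weight-decay convention
    used by torch.optim.SGD."""
    grad = grad + WEIGHT_DECAY * param      # L2 weight decay folded into the gradient
    velocity = MOMENTUM * velocity + grad   # momentum buffer accumulation
    param = param - LR * velocity           # parameter update
    return param, velocity

# One step from an initial parameter of 1.0 with gradient 0.5:
p, v = sgd_step(1.0, 0.5, 0.0)
```

With a base learning rate of 1e-7, each step moves the parameter only minutely, which is consistent with the paper's long 30K-iteration schedule.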