Salient Object Detection with Semantic Priors

Authors: Tam V. Nguyen, Luoqi Liu

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We further evaluate the proposed framework on two challenging datasets, namely, ECSSD and HKUIS. The extensive experimental results demonstrate that our method outperforms other state-of-the-art methods.'
Researcher Affiliation | Academia | Tam V. Nguyen, Department of Computer Science, University of Dayton (tamnguyen@udayton.edu); Luoqi Liu, Department of ECE, National University of Singapore (liuluoqi@u.nus.edu)
Pseudocode | No | The paper describes the steps of the algorithm in text and provides a pipeline diagram (Figure 1), but does not include structured pseudocode or an algorithm block. (A hedged sketch of the pipeline appears after this table.)
Open Source Code | No | The paper mentions 'our unoptimized Matlab code' but does not state that the code is released or provide a link to it.
Open Datasets | Yes | 'In particular, we utilize the CRF-FCN model trained from the PASCAL VOC 2007 dataset [Everingham et al., 2010] with 20 semantic classes. We trained our SP framework on HKUIS dataset [Li and Yu, 2015] (training part) which contains 4,000 pairs of images and groundtruth maps.'
Dataset Splits | No | The paper states that the HKUIS dataset has a 'training part' and a 'testing part', but does not explicitly describe a separate validation split with specific percentages or sample counts.
Hardware Specification | Yes | 'The average time is taken on a PC with Intel i7 2.6 GHz CPU and 8GB RAM with our unoptimized Matlab code.'
Software Dependencies | No | The paper mentions 'CRF-FCN', a 'random forest regressor', and 'Matlab code', but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | 'For the implementation, we adopt the extension of FCN, namely CRF-FCN [Zheng et al., 2015], to perform the semantic segmentation for the input image. In particular, we utilize the CRF-FCN model trained from the PASCAL VOC 2007 dataset [Everingham et al., 2010] with 20 semantic classes. We trained our SP framework on HKUIS dataset [Li and Yu, 2015] (training part) which contains 4,000 pairs of images and groundtruth maps. For the image over-segmentation, we adopt the method of [Achanta et al., 2012]. We set the number of regions as 200 as a trade-off between the fine over-segmentation and the processing time.' (A code sketch of the over-segmentation setting follows the table.)
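The over-segmentation setting quoted in the Experiment Setup row, the method of [Achanta et al., 2012] (SLIC superpixels) with 200 regions, can be shown as a minimal sketch. The paper's implementation is unoptimized Matlab; the scikit-image call below is a stand-in used purely for illustration, and the `compactness` value is an assumption not reported in the paper.

```python
# Minimal sketch of the over-segmentation step from the Experiment Setup row:
# SLIC superpixels (Achanta et al., 2012) with roughly 200 regions.
# scikit-image stands in for the paper's Matlab implementation.
import numpy as np
from skimage import data, segmentation

image = data.astronaut()           # placeholder RGB input image
labels = segmentation.slic(
    image,
    n_segments=200,                # 'number of regions as 200' per the quoted setup
    compactness=10.0,              # assumed value; not reported in the paper
    start_label=0,
)
print("superpixels produced:", labels.max() + 1)
```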
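Because the paper provides no pseudocode (see the Pseudocode row), the following is a hedged, illustrative sketch of how the pieces named in this report could fit together at inference time: SLIC regions, a per-region semantic prior standing in for the CRF-FCN output, and a random forest regressor mapping region features to saliency scores. The `region_features` helper, the feature choices, and the use of scikit-learn's `RandomForestRegressor` are assumptions for illustration, not the authors' SP framework; random placeholders replace the CRF-FCN model and the 4,000 HKU-IS training pairs.

```python
# Hedged sketch only: region features + semantic prior -> random forest -> saliency map.
import numpy as np
from skimage import data, segmentation
from sklearn.ensemble import RandomForestRegressor

def region_features(image, labels, semantic_prob):
    """Assumed per-region features: mean colour plus mean semantic probability."""
    feats = []
    for r in range(labels.max() + 1):
        mask = labels == r
        mean_rgb = image[mask].mean(axis=0) / 255.0
        mean_prior = semantic_prob[mask].mean()
        feats.append(np.concatenate([mean_rgb, [mean_prior]]))
    return np.asarray(feats)

image = data.astronaut()
labels = segmentation.slic(image, n_segments=200, start_label=0)

# Placeholder for a semantic prior map derived from CRF-FCN (20 PASCAL VOC
# classes collapsed to an object-likelihood channel); random values here.
semantic_prob = np.random.rand(*labels.shape)

X = region_features(image, labels, semantic_prob)

# Training would use region features and region-level ground-truth saliency
# from the 4,000 HKU-IS training pairs; random targets stand in here.
y_train = np.random.rand(X.shape[0])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_train)

# Assemble a per-pixel saliency map from the per-region predictions.
region_saliency = rf.predict(X)
saliency_map = region_saliency[labels]
print("saliency map shape:", saliency_map.shape)
```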