MEnet: A Metric Expression Network for Salient Object Segmentation
Authors: Shulian Cai, Jiabin Huang, Delu Zeng, Xinghao Ding, John Paisley
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed metric can generate robust salient maps that allow for object segmentation. By testing the method on several public benchmarks, we show that the performance of MEnet achieves excellent results. |
| Researcher Affiliation | Academia | 1 Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, China 2 School of Mathematics, South China University of Technology, China 3 Department of Electrical Engineering, Columbia University, USA |
| Pseudocode | No | No explicitly labeled "Pseudocode" or "Algorithm" block was found in the paper. The architecture and process are described via text and figures. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for their proposed method or a link to a code repository. |
| Open Datasets | Yes | The datasets we consider are: MSRA10K [Cheng et al., 2015], DUT-OMRON (DUT-O) [Yang et al., 2013], HKU-IS [Li and Yu, 2015], ECSSD [Yan et al., 2013], MSRA1K [Liu et al., 2011] and SOD [Martin et al., 2001]. |
| Dataset Splits | Yes | For MSRA10K, 8500 images for training, 500 images for validation and the MSRA1K for testing; HKU-IS was divided into approximately 80/5/15 training-validation-testing splits. |
| Hardware Specification | Yes | All experiments are performed on a PC with Intel(R) Xeon(R) CPU I7-6900k, 96GB RAM and GTX TITAN X Pascal (12G). |
| Software Dependencies | No | We use the Caffe software package to train our model [Jia et al., 2014]. However, no specific version number for Caffe or other software dependencies is provided. |
| Experiment Setup | Yes | We set the learning rate to 0.1 with weight decay of 10^-8, a momentum of 0.9 and a mini-batch size of 5. We train for 110,000 iterations. |
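The training hyperparameters quoted in the Experiment Setup row map directly onto a Caffe solver configuration. A minimal sketch is shown below; the `net` path and `lr_policy` are assumptions for illustration, not values reported in the paper, and note that in Caffe the mini-batch size of 5 would be set in the network definition prototxt rather than in the solver:

```protobuf
# solver.prototxt -- hypothetical sketch reconstructed from the reported settings
net: "train_val.prototxt"   # assumed path to the network definition (batch_size: 5 set there)
base_lr: 0.1                # learning rate, as reported
weight_decay: 1e-8          # as reported
momentum: 0.9               # as reported
max_iter: 110000            # 110,000 training iterations, as reported
lr_policy: "fixed"          # assumption; the paper does not state a decay schedule
solver_mode: GPU            # consistent with the reported GTX TITAN X Pascal hardware
```

This reconstruction covers only the hyperparameters the paper explicitly states; any field marked as an assumption would need to be confirmed against the authors' actual configuration.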