F³Net: Fusion, Feedback and Focus for Salient Object Detection
Authors: Jun Wei, Shuhui Wang, Qingming Huang
AAAI 2020, pp. 12321–12328
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on five benchmark datasets demonstrate that F3Net outperforms state-of-the-art approaches on six evaluation metrics. To demonstrate the performance of F3Net, we report experiment results on five popular SOD datasets and visualize some saliency maps. We conduct a series of ablation studies to evaluate the effect of each module. |
| Researcher Affiliation | Academia | 1Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China 2University of Chinese Academy of Sciences, Beijing, 100049, China |
| Pseudocode | Yes | Algorithm 1: Cascaded Feedback Decoder |
| Open Source Code | Yes | Code will be released at https://github.com/weijun88/F3Net. The code has since been released. |
| Open Datasets | Yes | The performance of F3Net is evaluated on five popular datasets, including ECSSD (Yan et al. 2013) with 1,000 images, PASCAL-S (Li et al. 2014) with 850 images, DUT-OMRON (Yang et al. 2013) with 5,168 images, HKU-IS (Li and Yu 2015) with 4,447 images and DUTS (Wang et al. 2017a) with 15,572 images. All datasets are human-labeled with pixel-wise ground truth for quantitative evaluation. |
| Dataset Splits | No | The paper mentions DUTS-TR (training) and DUTS-TE (testing) splits but does not explicitly describe a separate validation split or its size/methodology. |
| Hardware Specification | Yes | An RTX 2080Ti GPU is used for acceleration. |
| Software Dependencies | Yes | We use Pytorch 1.3 to implement our model. |
| Experiment Setup | Yes | The maximum learning rate is set to 0.005 for the ResNet-50 backbone and 0.05 for other parts. Warm-up and linear decay strategies are used to adjust the learning rate. The whole network is trained end-to-end using stochastic gradient descent (SGD). Momentum and weight decay are set to 0.9 and 0.0005, respectively. Batch size is set to 32 and the maximum number of epochs is 32. |
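
The warm-up and linear-decay learning-rate strategy quoted above can be sketched as a small schedule function. This is a minimal illustration, not the authors' released code: the exact warm-up length and decay endpoint are assumptions (`warmup_iters` and decay-to-zero are hypothetical choices), and the two maximum rates (0.005 for the backbone, 0.05 for other parts) would be applied via separate parameter groups in practice.

```python
def lr_schedule(iter_idx, total_iters, warmup_iters, max_lr):
    """Return the learning rate at a given iteration.

    Sketch of the paper's warm-up + linear-decay strategy:
    - warm-up: LR rises linearly from ~0 to max_lr over warmup_iters
    - decay:   LR falls linearly from max_lr toward 0 afterwards
    The precise shape (e.g. decaying to exactly 0) is an assumption.
    """
    if iter_idx < warmup_iters:
        # Linear warm-up phase.
        return max_lr * (iter_idx + 1) / warmup_iters
    # Linear decay over the remaining iterations.
    remaining = total_iters - warmup_iters
    return max_lr * (1.0 - (iter_idx - warmup_iters) / remaining)


# Example: 0.05 max LR (non-backbone parts), 100 iterations, 10 warm-up steps.
rates = [lr_schedule(i, 100, 10, 0.05) for i in range(100)]
```

In a PyTorch setup this function would feed `optimizer.param_groups[i]["lr"]` each iteration, with the backbone group capped at 0.005 and the remaining parameters at 0.05.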