Progressive Feature Polishing Network for Salient Object Detection
Authors: Bo Wang, Quan Chen, Min Zhou, Zhiqiang Zhang, Xiaogang Jin, Kun Gai | Pages 12128-12135
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical experiments show that our results are monotonically getting better with increasing number of FPMs. Without bells and whistles, PFPN outperforms the state-of-the-art methods significantly on five benchmark datasets under various evaluation metrics. |
| Researcher Affiliation | Collaboration | Bo Wang,1,2 Quan Chen,2 Min Zhou,2 Zhiqiang Zhang,2 Xiaogang Jin,1 Kun Gai2 1State Key Lab of CAD&CG, Zhejiang University 2Alibaba Group |
| Pseudocode | No | The paper provides a formal formulation as an equation (Eq. 1) and an illustration of a FPM block in Figure 3, but does not present explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at: https://github.com/chenquan-cq/PFPN. |
| Open Datasets | Yes | We conduct experiments on five well-known benchmark datasets: ECSSD, HKU-IS, PASCAL-S, DUT-OMRON and DUTS. ECSSD (Yan et al. 2013)... HKU-IS (Li and Yu 2015)... PASCAL-S (Li et al. 2014)... DUT-O (Yang et al. 2013)... DUTS (Wang et al. 2017a)... |
| Dataset Splits | No | The paper states training on the DUTS training set and uses data augmentation but does not specify a separate validation dataset split with proportions or counts. The dataset description for DUTS only mentions training and testing splits: '10,553 for training and 5,019 for testing'. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU types, or cloud computing instances. |
| Software Dependencies | No | The paper states 'We implement our method with Pytorch (Adam et al. 2017) framework,' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer to train our model without evaluation until the training loss converges. The initial learning rate is set to 1e-4 and the overall training procedure takes about 16000 iterations. For testing, the images are scaled to 256x256 to feed into the network and then the predicted saliency maps are bilinearly interpolated to the size of the original image. |
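The experiment setup above can be sketched in PyTorch. This is a minimal illustration of the reported configuration only (Adam at lr 1e-4; test images resized to 256x256, predictions bilinearly upsampled back to the original resolution); the model is a placeholder, not the authors' PFPN implementation, and the function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_optimizer(model: nn.Module) -> torch.optim.Adam:
    # Initial learning rate 1e-4, as reported in the paper.
    return torch.optim.Adam(model.parameters(), lr=1e-4)


def predict_saliency(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) tensor; returns a (1, 1, H, W) saliency map."""
    h, w = image.shape[-2:]
    # Scale the input to 256x256 before the forward pass, per the paper.
    x = F.interpolate(image, size=(256, 256), mode="bilinear",
                      align_corners=False)
    with torch.no_grad():
        pred = model(x)
    # Bilinearly interpolate the prediction back to the original image size.
    return F.interpolate(pred, size=(h, w), mode="bilinear",
                         align_corners=False)
```

A stand-in model such as `nn.Conv2d(3, 1, kernel_size=1)` is enough to exercise the resize-predict-upsample round trip on an arbitrarily sized input.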