Few-Cost Salient Object Detection with Adversarial-Paced Learning

Authors: Dingwen Zhang, Haibin Tian, Jungong Han

Venue: NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on four widely-used benchmark datasets demonstrate that the proposed method can effectively approach the existing supervised deep salient object detection models with only 1k human-annotated training images.
Researcher Affiliation | Academia | (1) School of Mechano-Electronic Engineering, Xidian University, Xi'an, Shaanxi 710071; (2) Computer Science Department, Aberystwyth University, Ceredigion, SY23 3FL
Pseudocode | No | The paper refers to "Alg. 1 and Alg. 2" as showing the optimization pipeline, but these algorithm blocks are not present in the provided text.
Open Source Code | Yes | The project page is available at https://github.com/hb-stone/FC-SOD.
Open Datasets | Yes | We use four widely-used benchmark datasets to implement the experiments, which include PASCAL-S [41], DUT-O [42], SOD [43], and DUTS [28]. Following the previous works [44, 20, 45], we use the training split of the DUTS dataset for training and test the trained models on the other datasets.
Dataset Splits | No | Following the previous works [44, 20, 45], we use the training split of the DUTS dataset for training and test the trained models on the other datasets. The paper specifies training and test splits but does not explicitly detail a separate validation split or its size/proportion.
Hardware Specification | Yes | We implement the proposed algorithm on the PyTorch framework using an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | We implement the proposed algorithm on the PyTorch framework using an NVIDIA GTX 1080Ti GPU. The paper mentions PyTorch but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | When training the saliency network, we use the Stochastic Gradient Descent (SGD) optimization method, where the momentum is set to 0.9 and the weight decay is set to 5×10⁻⁴. The initial learning rates of the task-predictor and the pace-generator are 2.5×10⁻⁴ and 10⁻⁴, respectively, which are decreased with polynomial decay parameterized by 0.9. For training the pace network, we adopt the Adam optimizer [46] with the learning rate 10⁻⁴. The same polynomial decay as the saliency network is also used. We set β = 0.01 and η = 0.7 according to a heuristic grid search process. Our method uses 24.5K iterations in total, and the loss and performance curves are shown in Fig. 2.
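For reference, the quoted training hyperparameters can be wired up in PyTorch roughly as follows. This is a minimal sketch rather than the authors' released code: the network modules are hypothetical stand-ins, and the schedule assumes the common polynomial decay form (1 − iter/max_iter)^0.9, which matches the "parameterized by 0.9" description but is not spelled out in the quoted text.

```python
import torch

# Hypothetical stand-in modules; the real architectures are defined in the
# authors' repository (https://github.com/hb-stone/FC-SOD).
task_predictor = torch.nn.Conv2d(3, 1, 3, padding=1)  # stand-in for the saliency task-predictor
pace_generator = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the pace-generator
pace_network = torch.nn.Conv2d(1, 1, 1)               # stand-in for the pace network

# SGD for the saliency network: momentum 0.9, weight decay 5e-4, with initial
# learning rates 2.5e-4 (task-predictor) and 1e-4 (pace-generator) as quoted.
saliency_optimizer = torch.optim.SGD(
    [
        {"params": task_predictor.parameters(), "lr": 2.5e-4},
        {"params": pace_generator.parameters(), "lr": 1e-4},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)

# Adam for the pace network with learning rate 1e-4.
pace_optimizer = torch.optim.Adam(pace_network.parameters(), lr=1e-4)

TOTAL_ITERS = 24_500  # the paper reports 24.5K iterations in total
POWER = 0.9           # polynomial decay exponent quoted in the paper


def poly_decay(iteration: int) -> float:
    """Assumed polynomial decay factor: (1 - iter / max_iter) ** power."""
    return max(0.0, 1.0 - iteration / TOTAL_ITERS) ** POWER


# LambdaLR multiplies each group's initial lr by the returned factor at every step.
saliency_scheduler = torch.optim.lr_scheduler.LambdaLR(saliency_optimizer, lr_lambda=poly_decay)
pace_scheduler = torch.optim.lr_scheduler.LambdaLR(pace_optimizer, lr_lambda=poly_decay)
```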