Unsupervised Domain Adaptive Salient Object Detection through Uncertainty-Aware Pseudo-Label Learning
Authors: Pengxiang Yan, Ziyi Wu, Mengmeng Liu, Kun Zeng, Liang Lin, Guanbin Li (pp. 3000-3008)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets, and is even comparable to fully-supervised ones. |
| Researcher Affiliation | Collaboration | Pengxiang Yan (1,2*), Ziyi Wu (1*), Mengmeng Liu (1), Kun Zeng (1), Liang Lin (1), Guanbin Li (1,3); affiliations: 1 Sun Yat-sen University, 2 ByteDance Inc., 3 Shenzhen Research Institute of Big Data |
| Pseudocode | No | The paper describes the proposed UDASOD framework and Uncertainty-Aware Pseudo-Label Learning strategy in detail, but it does not include a formal pseudocode block or a section explicitly labeled 'Algorithm'. |
| Open Source Code | No | The paper states 'More implementation details are provided in the supplemental materials.' but does not explicitly provide a link to source code or state that the code for their methodology is publicly available. |
| Open Datasets | Yes | During training, we adopt SYNSOD (11,197 images) as the source domain and the training set of DUTS (Wang et al. 2017) (10,533 images) as the target domain. As shown in Fig. 3, we present the following dataset statistics on our proposed SYNSOD dataset and five public benchmark SOD datasets (Wang et al. 2017; Yan et al. 2013; Yang et al. 2013; Li and Yu 2015; Li et al. 2014). |
| Dataset Splits | No | The paper states using SYNSOD as the source domain (training) and the training set of DUTS as the target domain (for pseudo-label learning). It evaluates on DUTS-TE and other benchmark datasets, but it does not explicitly provide a distinct validation dataset split or percentages for a validation set. |
| Hardware Specification | Yes | The whole training process takes about 20 hours with a batch size of 32 on a workstation with an NVIDIA GTX 1080 GPU. |
| Software Dependencies | No | The paper mentions using ResNet-50-based LDF as their saliency detector and an SGD optimizer, but it does not specify software dependencies with version numbers (e.g., PyTorch version, CUDA version). |
| Experiment Setup | Yes | During training, we adopt SYNSOD (11,197 images) as the source domain and the training set of DUTS (Wang et al. 2017) (10,533 images) as the target domain. We set the total number of training rounds to six. ... We use an SGD optimizer and adopt the linear one cycle learning rate policy (Smith and Topin 2019) to schedule each training round. The whole training process takes about 20 hours with a batch size of 32... During testing, each image is resized to 352 × 352... We set k = 20 in our experiments. |
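
The Experiment Setup row above maps onto a short training configuration. The following is a minimal PyTorch sketch, not the authors' code: the backbone (a torchvision ResNet-50 standing in for the ResNet-50-based LDF detector), the peak learning rate, momentum, and weight decay are assumptions, since the excerpt does not report them. Only the round count, batch size, SGD optimizer, linear one-cycle schedule, test-time resolution, and k come from the quoted setup.

```python
# Hedged sketch of the reported training configuration.
# Assumed (not in the excerpt): backbone choice, peak lr, momentum, weight decay,
# and the number of optimizer steps per training round.
import torch
from torchvision import models, transforms

NUM_ROUNDS = 6       # "total number of training rounds to six"
BATCH_SIZE = 32      # "a batch size of 32"
IMAGE_SIZE = 352     # test images resized to 352 x 352
TOP_K = 20           # "We set k = 20 in our experiments"

# Placeholder backbone; the paper uses a ResNet-50-based LDF saliency detector.
model = models.resnet50(weights=None)

# SGD optimizer; lr / momentum / weight_decay values are placeholders, not reported numbers.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)

def make_round_scheduler(optimizer: torch.optim.Optimizer, steps_per_round: int):
    """One linear one-cycle schedule per training round (Smith and Topin 2019)."""
    return torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=0.05,                  # assumed peak learning rate
        total_steps=steps_per_round,  # steps in this round (assumed known from the dataloader)
        anneal_strategy="linear",     # linear one-cycle policy
    )

# Test-time preprocessing: resize each image to 352 x 352.
test_transform = transforms.Compose([
    transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),
    transforms.ToTensor(),
])
```

Instantiating a fresh `OneCycleLR` for each of the six rounds mirrors the quoted statement that the one-cycle policy is used "to schedule each training round", rather than spanning a single schedule over all rounds.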