Multi-Spectral Salient Object Detection by Adversarial Domain Adaptation

Authors: Shaoyue Song, Hongkai Yu, Zhenjiang Miao, Jianwu Fang, Kang Zheng, Cong Ma, Song Wang (pp. 12023-12030)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show the effectiveness and accuracy of the proposed adversarial domain adaptation for the multi-spectral SOD."
Researcher Affiliation | Academia | (1) Institute of Information Science, Beijing Jiaotong University, Beijing, China; (2) Department of Computer Science, University of Texas-Rio Grande Valley, Edinburg, TX, USA; (3) School of Electronic and Control Engineering, Chang'an University, Xi'an, China; (4) Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, USA; (5) School of Computer Science and Technology, Tianjin University, Tianjin, China
Pseudocode | No | The paper describes the methodology using textual explanations and network diagrams (e.g., Fig. 4, Fig. 5) but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states it will publicize a dataset ("we first collect and will publicize a large multi-spectral dataset"), but does not explicitly state that the source code for the described methodology will be made publicly available or provide a link to a code repository.
Open Datasets | Yes | MSRA-B (Liu et al. 2010), DUTS (Wang et al. 2017b), and HKU-IS (Li and Yu 2015) are existing public RGB image datasets. The MSRA-B dataset (Jiang et al. 2013; Wang et al. 2017a) is specifically chosen as the source domain for experiments.
Dataset Splits | Yes | The MSRA-B dataset includes 5000 RGB images. The dataset is divided into three parts at the ratio of 5:1:4 (training: 2500 images, validation: 500 images, testing: 2000 images) (Jiang et al. 2013). For the supervised scenario, the collected multi-spectral SOD dataset is split into training, validation, and testing subsets at the same 5:1:4 ratio, following the split principle of the MSRA-B dataset (Jiang et al. 2013).
Hardware Specification | Yes | "We implement our networks using PyTorch running on a single Tesla P40 GPU."
Software Dependencies | No | The paper mentions PyTorch as the deep learning framework, but does not provide specific version numbers for it or any other key software dependencies.
Experiment Setup | Yes | During the training procedure, the batch size is set to 1. A stochastic gradient descent optimizer is adopted for training, with momentum 0.99 and weight decay 0.0005. For the learning rate, the authors follow the setup in (Wada 2017), i.e., lr = 10^-10 for layers with bias = False, and 2*lr for layers with bias = True. For training the domain classifier (discriminator), an Adam optimizer is adopted with an initial learning rate of 1×10^-4.
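The 5:1:4 split protocol reported above (2500/500/2000 on the 5000-image MSRA-B dataset) can be sketched as follows; the function name and use of a plain index list are illustrative, not from the authors' code:

```python
# Illustrative 5:1:4 train/validation/test split, matching the MSRA-B
# protocol described in the paper (Jiang et al. 2013).
def split_5_1_4(items):
    """Partition a list of samples into 50% train, 10% val, 40% test."""
    n = len(items)
    n_train = n * 5 // 10
    n_val = n // 10
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_5_1_4(list(range(5000)))
# Yields 2500 / 500 / 2000 samples, as reported for MSRA-B.
```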
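The reported optimizer hyperparameters can be summarized in a minimal configuration sketch. The values (batch size, momentum, weight decay, base learning rate, bias-layer doubling, Adam learning rate) come from the paper; the dictionary layout and helper function are our own illustration, not the authors' released code:

```python
# Hedged sketch of the reported training configuration.
# SGD settings for the main network (values from the paper).
sgd_config = {
    "batch_size": 1,
    "momentum": 0.99,
    "weight_decay": 0.0005,
}
base_lr = 1e-10  # lr for layers with bias=False, per (Wada 2017)

def layer_lr(has_bias):
    """Layers with bias=True train at twice the base learning rate."""
    return 2 * base_lr if has_bias else base_lr

# Adam settings for the domain classifier (discriminator).
adam_initial_lr = 1e-4
```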