LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise

Authors: He Wang, Lin Wan, He Tang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental Results: In this paper, we focus on both RGB-D and RGB based salient object detection. For RGB-D based salient object detection, we experiment on NJU2K (Ju et al. 2014a), NLPR (Peng et al. 2014), LFSD (Li et al. 2014a), SIP (Fan et al. 2020e), NJUD (Ju et al. 2014b), STEREO (Niu et al. 2012) and DUTS-D (Piao et al. 2019). We train on the training set of NJU2K and NLPR, their test set and other public datasets are all for testing.
Researcher Affiliation | Academia | He Wang (1,2), Lin Wan (1), He Tang (1,*); (1) School of Software Engineering, Huazhong University of Science and Technology; (2) School of Cyber Science and Engineering, Huazhong University of Science and Technology; {hew, linwan, hetang}@hust.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/ssecv/LeNo.
Open Datasets | Yes | For RGB-D based salient object detection, we experiment on NJU2K (Ju et al. 2014a), NLPR (Peng et al. 2014), LFSD (Li et al. 2014a), SIP (Fan et al. 2020e), NJUD (Ju et al. 2014b), STEREO (Niu et al. 2012) and DUTS-D (Piao et al. 2019).
Dataset Splits | No | The paper states 'We train on the training set of NJU2K and NLPR, their test set and other public datasets are all for testing.' and refers to a validation set in the ablation studies, but it does not provide specific percentages or sample counts for training, validation, or test splits needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | We perform a total of three attacks, namely FGSM, PGD and ROSA. Their step sizes are 0.3, 0.04 and 0.1, respectively. The max iterations of PGD and ROSA are chosen as 10 and 30. Our bound is set to 20 just like ROSA. ... we set λ to 0.1 in our experiments. ... initialize two noises, which are shaped as R^{C×H×(W/2)} and R^{C×(H/2)×W}. ... each element of it is initialized as 0.25. ... The network is trained with only clean images, but it performs well on both clean and adversarial images. The training process is divided into two phases. ... utilize SGD to train the network and alternately update these two parameters, obtaining θ_n^1 and θ_w^1.
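To make the quoted setup more concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: it initializes the two learnable noise tensors, shaped C×H×(W/2) and C×(H/2)×W with every element set to 0.25, and runs an L_inf PGD attack with the stated step size 0.04 and 10 iterations. The class and function names are hypothetical, and the bound of 20 is assumed to be on the 0-255 pixel scale, so it is rescaled to 20/255 for inputs normalized to [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableNoise(nn.Module):
    """Sketch of the two learnable noise tensors described in the setup:
    shapes (C, H, W/2) and (C, H/2, W), each element initialized to 0.25.
    The class name and how the noise is injected into the shallow features
    are assumptions, not the authors' exact design."""

    def __init__(self, c: int, h: int, w: int):
        super().__init__()
        self.noise_half_w = nn.Parameter(torch.full((c, h, w // 2), 0.25))
        self.noise_half_h = nn.Parameter(torch.full((c, h // 2, w), 0.25))


def pgd_attack(model, images, labels, step_size=0.04, num_iters=10, bound=20 / 255.0):
    """L_inf PGD with the quoted step size (0.04) and 10 iterations.
    The bound of 20 is assumed to refer to the 0-255 scale, hence 20/255 here."""
    adv = images.clone().detach()
    for _ in range(num_iters):
        adv.requires_grad_(True)
        pred = model(adv)  # predicted saliency map (logits)
        loss = F.binary_cross_entropy_with_logits(pred, labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Gradient-sign ascent step, then project back into the L_inf ball and [0, 1].
        adv = adv.detach() + step_size * grad.sign()
        adv = torch.min(torch.max(adv, images - bound), images + bound)
        adv = adv.clamp(0.0, 1.0)
    return adv
```

The two-phase training that alternately updates the noise parameters and the network weights with SGD (yielding θ_n^1 and θ_w^1) is not shown; per the quote, the network is trained on clean images only, so attacks such as the PGD sketch above would be used at evaluation time.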