R³Net: Recurrent Residual Refinement Network for Saliency Detection
Authors: Zijun Deng, Xiaowei Hu, Lei Zhu, Xuemiao Xu, Jing Qin, Guoqiang Han, Pheng-Ann Heng
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed R3Net on five widely-used saliency detection benchmarks by comparing it with 16 state-of-the-art saliency detectors. Experimental results show that our network outperforms our competitors in all the benchmark datasets. |
| Researcher Affiliation | Academia | 1 South China University of Technology, 2 The Chinese University of Hong Kong, 3 The Hong Kong Polytechnic University, 4 Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China |
| Pseudocode | No | The paper describes the network architecture and equations but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code, trained model and more results are publicly available at https://github.com/zijundeng/R3Net. |
| Open Datasets | Yes | Our R3Net is trained on the MSRA10K dataset [Cheng et al., 2015], which is widely used for training the saliency models [Lee et al., 2016; Zhang et al., 2017a]. |
| Dataset Splits | No | The paper mentions deep supervision during training but does not specify a validation split (e.g., percentages or sample counts for a validation set). |
| Hardware Specification | No | Our R3Net is trained on a single GPU with a mini-batch size of 14, and it takes only 80 minutes to train the network. The specific GPU model or type is not stated. |
| Software Dependencies | No | The paper mentions using the 'ResNeXt network on ImageNet' and 'stochastic gradient descent (SGD)' but does not provide version numbers for software dependencies such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | We use the stochastic gradient descent (SGD) to train the network with the momentum of 0.9 and the weight decay of 0.0005, set the basic learning rate as 0.001, adjust the learning rate by the poly policy [Liu et al., 2015] with the power of 0.9, and stop the training procedure after 6k iterations. ... trained on a single GPU with a mini-batch size of 14, and it takes only 80 minutes to train the network. ... we empirically set all the weights (including w0 and wi) as 1, and set the hyper-parameter n as 6 by balancing the time performance and the detection accuracy. (A minimal sketch of this setup follows the table.) |
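
The experiment-setup row quotes enough detail to reconstruct the optimisation loop. Below is a minimal PyTorch sketch (PyTorch chosen for illustration) wiring those quoted values together: SGD with momentum 0.9 and weight decay 0.0005, base learning rate 0.001 decayed by the poly policy with power 0.9, 6k iterations, mini-batches of 14, and unit loss weights over n = 6 refinements. The `ToyRefiner` module, the random tensors, and the 64×64 resolution are placeholders of ours, not the authors' architecture or data.

```python
import torch
import torch.nn.functional as F

class ToyRefiner(torch.nn.Module):
    """Illustrative stand-in for R3Net: one initial saliency map plus
    n refined maps, so the deep-supervision loss below has n + 1 terms."""
    def __init__(self, n: int = 6):
        super().__init__()
        self.stages = torch.nn.ModuleList(
            torch.nn.Conv2d(3, 1, 3, padding=1) for _ in range(n + 1))

    def forward(self, x):
        return [stage(x) for stage in self.stages]

model = ToyRefiner(n=6)                     # n = 6 refinements, as in the paper
base_lr, power, max_iter = 1e-3, 0.9, 6000  # quoted hyper-parameters
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)

def poly_lr(it: int) -> float:
    # "poly" policy [Liu et al., 2015]: base_lr * (1 - it / max_iter) ** power
    return base_lr * (1 - it / max_iter) ** power

for it in range(max_iter):
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(it)
    images = torch.randn(14, 3, 64, 64)     # mini-batch size 14; random stand-in data
    targets = torch.rand(14, 1, 64, 64)     # stand-in ground-truth saliency masks
    # All loss weights (w0 and wi) are set to 1, so the weighted sum of
    # per-stage losses reduces to a plain sum over the n + 1 predictions.
    loss = sum(F.binary_cross_entropy_with_logits(p, targets)
               for p in model(images))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With unit weights the deep-supervision objective collapses to an unweighted sum, which is why the sketch needs no explicit w0/wi variables; swapping in the authors' released model from https://github.com/zijundeng/R3Net would replace only the `ToyRefiner` placeholder.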