ESNet: Evolution and Succession Network for High-Resolution Salient Object Detection

Authors: Hongyu Liu, Runmin Cong, Hua Li, Qianqian Xu, Qingming Huang, Wei Zhang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on five datasets demonstrate that our network achieves superior performance at real-time speed (49 FPS) compared to state-of-the-art methods. (A timing sketch for the FPS claim follows the table.)
Researcher Affiliation | Academia | 1 Institute of Information Science, Beijing Jiaotong University & Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing, China; 2 School of Control Science and Engineering, Shandong University & Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan, China; 3 School of Computer Science and Technology, Hainan University, Hainan, China; 4 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China; 5 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China.
Pseudocode | No | The paper describes algorithms and methods in textual form and provides architectural diagrams, but it does not include structured pseudocode blocks or algorithm listings.
Open Source Code | Yes | Our code is publicly available at: https://github.com/rmcong/ESNet_ICML24.
Open Datasets | Yes | The HRSOD (Zeng et al., 2019) and DUTS (Wang et al., 2017) datasets are explicitly mentioned for training: “the training set of HRSOD and DUTS datasets are used for training.” (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper names training and testing splits (HRSOD-TR/TE, DUTS-TR/TE, DAVIS-SOD, UHRSD-TR/TE) but does not specify a separate validation split with proportions or counts, nor any explicit use of one for hyperparameter tuning.
Hardware Specification | Yes | We implement the proposed ESNet in PyTorch and conduct experiments on a single NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using “PyTorch” and the “MindSpore Lite tool” but does not provide version numbers for these components or for any other libraries. (A version-recording sketch follows the table.)
Experiment Setup | Yes | An SGD optimizer with momentum of 0.9 and weight decay of 0.0005 is used. The batch size is set to 48 and the number of training epochs to 80. A warm-up and linear-decay learning-rate strategy is used, with a maximum learning rate of 0.005 for the pre-trained ResNet50 feature-extraction backbone and 0.05 for the rest of the network. (An optimizer/scheduler sketch follows the table.)
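
The 49 FPS figure in the Research Type row is a throughput claim, so a reproduction would time the forward pass directly on the stated GPU. Below is a minimal PyTorch timing sketch, not the authors' benchmark script: the input resolution (1024×1024) and the `measure_fps` helper are assumptions, and `esnet` stands in for a model loaded from the authors' repository.

```python
import time
import torch

def measure_fps(model, input_size=(1, 3, 1024, 1024), warmup=10, iters=100):
    """Average forward-pass throughput on GPU; input_size is an assumption."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(warmup):       # warm up kernels / cuDNN autotuning
            model(x)
        torch.cuda.synchronize()      # ensure timing brackets only GPU work
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)

# e.g. print(f"{measure_fps(esnet):.1f} FPS")  # esnet loaded from the authors' repo
```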
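The paper trains on the union of the HRSOD and DUTS training sets. A minimal sketch of how the two splits could be merged in PyTorch is shown below; the `SODDataset` class, its directory layout, and the paths are hypothetical, while the batch size of 48 matches the Experiment Setup row.

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, ConcatDataset, DataLoader
import torchvision.transforms.functional as TF

class SODDataset(Dataset):
    """Minimal image/mask pair dataset; the directory layout is an assumption."""
    def __init__(self, root, size=1024):
        self.images = sorted(Path(root, "images").glob("*.jpg"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))
        self.size = size

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = Image.open(self.images[i]).convert("RGB").resize((self.size, self.size))
        mask = Image.open(self.masks[i]).convert("L").resize((self.size, self.size))
        return TF.to_tensor(img), TF.to_tensor(mask)

# Merge the two training splits named in the paper (paths are placeholders).
train_set = ConcatDataset([SODDataset("data/HRSOD-TR"), SODDataset("data/DUTS-TR")])
train_loader = DataLoader(train_set, batch_size=48, shuffle=True,
                          num_workers=8, pin_memory=True, drop_last=True)
```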
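Since the paper omits version numbers, anyone reproducing it would need to record their own environment alongside results. A small sketch that logs the components relevant here (which libraries to pin beyond these is a judgment call):

```python
import platform
import torch
import torchvision

# Log the environment the experiments actually ran in.
print("python     :", platform.python_version())
print("pytorch    :", torch.__version__)
print("torchvision:", torchvision.__version__)
print("cuda       :", torch.version.cuda)
print("cudnn      :", torch.backends.cudnn.version())
print("gpu        :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu")
```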
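The Experiment Setup row fully specifies the optimizer, so it translates almost directly into PyTorch. The sketch below assumes `model` is an ESNet instance with its ResNet50 under a `backbone` attribute (an assumption about the repo's naming), and the warm-up length of 5 epochs is likewise assumed, since the paper only states that warm-up is used.

```python
import torch

# Two parameter groups with the learning rates from the paper: 0.005 for the
# ResNet50 backbone, 0.05 for everything else. `model.backbone` is assumed.
backbone_params = [p for n, p in model.named_parameters() if n.startswith("backbone")]
other_params    = [p for n, p in model.named_parameters() if not n.startswith("backbone")]

optimizer = torch.optim.SGD(
    [{"params": backbone_params, "lr": 0.005},
     {"params": other_params,    "lr": 0.05}],
    momentum=0.9, weight_decay=5e-4)

epochs, warmup_epochs = 80, 5            # warm-up length is an assumption

def lr_lambda(epoch):
    if epoch < warmup_epochs:            # linear warm-up to the maximum LR
        return (epoch + 1) / warmup_epochs
    # linear decay from the maximum LR toward zero over the remaining epochs
    return 1.0 - (epoch - warmup_epochs) / (epochs - warmup_epochs)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# Call scheduler.step() once per epoch, after the training-loop body.
```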