Densely Cascaded Shadow Detection Network via Deeply Supervised Parallel Fusion
Authors: Yupei Wang, Xin Zhao, Yin Li, Xuecai Hu, Kaiqi Huang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is evaluated on two widely used shadow detection benchmarks. Experimental results show that our method outperforms state-of-the-art methods by a large margin. |
| Researcher Affiliation | Academia | 1 CRIPAC, NLPR, Institute of Automation, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences; 3 Carnegie Mellon University; 4 University of Science and Technology of China; 5 CAS Center for Excellence in Brain Science and Intelligence Technology |
| Pseudocode | No | The paper describes the model architecture and processes using text and diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | Yes | We evaluate our method on two widely used benchmarks: SBU [Vicente et al., 2016] and UCF [Zhu et al., 2011]. |
| Dataset Splits | No | The paper states 'we train our models on SBU training set, and evaluate the trained models on SBU testing set and UCF testing set,' but does not explicitly mention a separate validation set or provide details on its split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper states 'All our models are trained using Caffe [Jia et al., 2014] as backend,' but does not provide specific version numbers for Caffe or any other software dependencies. |
| Experiment Setup | Yes | The hyperparameters, including the initial learning rate, weight decay and momentum, are set to 1e-8, 2e-4 and 0.9, respectively. Our DSPF network is initialized from the trained HED network, and our DC-DSPF is further trained on top of DSPF. The hyperparameters of DC-DSPF are set to 1e-8, 2e-4 and 0.99 respectively for the initial learning rate, weight decay and momentum. All new convolutional layers are initialized from a Gaussian distribution with fixed mean (0.0) and variance (0.01). We apply random flipping for data augmentation during training. |
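
Since the paper reports Caffe as the training backend, the setup above maps directly onto Caffe's solver and layer protobufs. The sketch below shows one way to express those settings via pycaffe's protobuf bindings; the net path, layer name, learning-rate policy, iteration count, and snapshot prefix are hypothetical, as the paper does not report them.

```python
from caffe.proto import caffe_pb2

# Solver settings quoted from the paper (DSPF stage); anything not
# reported there (net path, lr policy, iteration count, snapshot
# prefix) is a placeholder.
solver = caffe_pb2.SolverParameter()
solver.net = "models/dspf_train.prototxt"  # hypothetical path
solver.base_lr = 1e-8                      # initial learning rate
solver.weight_decay = 2e-4
solver.momentum = 0.9                      # 0.99 for the DC-DSPF stage
solver.lr_policy = "fixed"                 # assumption: no schedule is given
solver.max_iter = 10000                    # hypothetical
solver.snapshot_prefix = "snapshots/dspf"  # hypothetical

with open("solver.prototxt", "w") as f:
    f.write(str(solver))  # protobuf text format, readable by Caffe

# Weight filler for a newly added convolutional layer. The paper reports
# a Gaussian with mean 0.0 and "variance 0.01"; Caffe's filler takes a
# standard deviation, so std = 0.01 is assumed here.
layer = caffe_pb2.LayerParameter()
layer.name = "fusion_conv"  # hypothetical layer name
layer.type = "Convolution"
layer.convolution_param.weight_filler.type = "gaussian"
layer.convolution_param.weight_filler.mean = 0.0
layer.convolution_param.weight_filler.std = 0.01
```

Note the two-stage recipe the paper describes: DSPF is fine-tuned from the trained HED network with momentum 0.9, and DC-DSPF is then trained on top of the resulting DSPF weights with momentum raised to 0.99.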