Bottom-up and Top-down: Bidirectional Additive Net for Edge Detection
Authors: Lianli Gao, Zhilong Zhou, Heng Tao Shen, Jingkuan Song
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our proposed method can improve the edge detection performance to new records and achieve state-of-the-art results on two public benchmarks: BSDS500 and NYUDv2. The ablation study also verifies the effect of each component. |
| Researcher Affiliation | Academia | Lianli Gao, Zhilong Zhou, Heng Tao Shen and Jingkuan Song, Center for Future Media and School of Computer Science and Technology, University of Electronic Science and Technology of China; lianli.gao@uestc.edu.cn, {zhilong.zhou1996, jingkuan.song}@gmail.com, shenhengtao@hotmail.com |
| Pseudocode | No | The paper describes the proposed architecture and modules in text and diagrams (Figure 4, Figure 5) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit code-release statement) for the source code of the described methodology. |
| Open Datasets | Yes | To evaluate our proposed method, we use two commonly used benchmark datasets: BSDS500 [Arbelaez et al., 2011] and NYUDv2 [Silberman et al., 2012]. |
| Dataset Splits | Yes | Following [Xie and Tu, 2017; Liu et al., 2019], we train our network on the training and validation sets and test our method on the test set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using VGG16 pre-trained on ImageNet and SGD optimization, but does not provide specific version numbers for any software components, libraries, or frameworks. |
| Experiment Setup | Yes | The hyper-parameter λd in Equation 10 is set as 300. C, N and L are set to 21, 2 and 5 in Section 2. Following [Liu et al., 2019], λ is set to 1.1 for BSDS500 and 1.2 for NYUDv2, respectively. We employ SGD optimization during training. The learning rate, momentum, weight decay and batch size are set to 1e-7, 0.9, 2e-4 and 10, respectively. We train on BSDS500 for 15k steps and on NYUDv2 for 30k steps because the training sets differ in size. |
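
The experiment-setup row above quotes enough detail to sketch a plausible training configuration. The following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: `EdgeHead` is a hypothetical stand-in (the stated ImageNet-pre-trained VGG16 backbone plus an illustrative 1x1 scoring layer), the class-balanced loss follows the RCF formulation of [Liu et al., 2019] that the paper cites for λ, and the Equation 10 term weighted by λd = 300 is omitted because that equation is not reproduced in this summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


def balanced_edge_loss(logits, label, lam=1.1):
    """Class-balanced cross-entropy in the style of [Liu et al., 2019].

    lam up-weights the loss on non-edge pixels; the paper sets it to
    1.1 for BSDS500 and 1.2 for NYUDv2.
    """
    pos = (label > 0.5).float()
    neg = 1.0 - pos
    num_pos, num_neg = pos.sum(), neg.sum()
    total = num_pos + num_neg
    alpha = lam * num_pos / total   # weight applied to negative pixels
    beta = num_neg / total          # weight applied to positive pixels
    weight = alpha * neg + beta * pos
    return F.binary_cross_entropy_with_logits(
        logits, pos, weight=weight, reduction="sum"
    )


class EdgeHead(nn.Module):
    """Hypothetical placeholder: the paper's bidirectional additive decoder
    is not publicly released, so only the stated VGG16 backbone is real."""

    def __init__(self):
        super().__init__()
        # VGG16 pre-trained on ImageNet, as stated in the paper.
        self.backbone = vgg16(weights="IMAGENET1K_V1").features
        self.score = nn.Conv2d(512, 1, kernel_size=1)  # illustrative 1x1 head

    def forward(self, x):
        h, w = x.shape[-2:]
        out = self.score(self.backbone(x))
        # Upsample the coarse edge map back to input resolution.
        return F.interpolate(out, size=(h, w), mode="bilinear",
                             align_corners=False)


model = EdgeHead()
# SGD with the quoted hyper-parameters: lr 1e-7, momentum 0.9, weight decay 2e-4.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-7,
                            momentum=0.9, weight_decay=2e-4)
# Quoted schedule: batch size 10; 15k iterations on BSDS500, 30k on NYUDv2.
```

The `reduction="sum"` follows HED/RCF practice of summing the per-pixel loss over the image, which also helps explain the unusually small quoted learning rate of 1e-7.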