Sub-Band Based Attention for Robust Polyp Segmentation
Authors: Xianyong Fang, Yuqing Shi, Qingqing Guo, Linbo Wang, Zhengyi Liu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section reports some experimental results. For the complete codes and more experimental settings, results and ablation studies, please check the supplementary document. |
| Researcher Affiliation | Academia | Xianyong Fang, Yuqing Shi, Qingqing Guo, Linbo Wang and Zhengyi Liu, School of Computer Science and Technology, Anhui University; fangxianyong@ahu.edu.cn, e21201044@stu.ahu.edu.cn, guoqingad@sina.com, {wanglb, liuzywen}@ahu.edu.cn |
| Pseudocode | No | The paper contains figures illustrating module structures and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For the complete codes and more experimental settings, results and ablation studies, please check the supplementary document. |
| Open Datasets | Yes | Five colonoscopy image datasets are adopted, including ETIS [Silva et al., 2014], CVC-ClinicDB [Bernal et al., 2015], CVC-ColonDB [Tajbakhsh et al., 2015], CVC-300 [Vázquez et al., 2017] and Kvasir [Jha et al., 2020]. |
| Dataset Splits | No | The paper states: 'The training set contains 1450 images selected from Kvasir [Jha et al., 2020] and CVC-ClinicDB [Bernal et al., 2015] with all the left images taken as testing image.' This describes the training and test splits, but no explicit validation split is mentioned, and no specific counts or percentages are given for one. |
| Hardware Specification | Yes | SBA-Net is implemented in PyTorch with the CUDA library, a GeForce RTX 3090 Ti GPU and an Intel Core i7-12700KF processor. |
| Software Dependencies | No | The paper mentions 'SBA-Net is implemented in PyTorch with the CUDA library,' but does not provide specific version numbers for PyTorch, CUDA, or other key software dependencies. |
| Experiment Setup | Yes | The Adam optimizer is adopted with a learning rate of 5e-5. The batch size is set to 8 and training runs for 80 epochs. |
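
The reported optimization settings can be sketched in PyTorch as follows. The model here is a trivial placeholder, since the paper's SBA-Net architecture is only provided in its supplementary material; only the stated hyperparameters (Adam, learning rate 5e-5, batch size 8, 80 epochs) come from the paper.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for SBA-Net; the real architecture is not
# reproduced here, only the reported optimization settings.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

LEARNING_RATE = 5e-5   # learning rate reported in the paper
BATCH_SIZE = 8         # batch size reported in the paper
NUM_EPOCHS = 80        # number of epochs reported in the paper

# Adam optimizer as stated in the experiment setup.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```

Note that no learning-rate schedule, weight decay, or data-augmentation settings are given in the extracted text, so none are assumed here.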