Pyramid Attention Aggregation Network for Semantic Segmentation of Surgical Instruments

Authors: Zhen-Liang Ni, Gui-Bin Bian, Guan-An Wang, Xiao-Hu Zhou, Zeng-Guang Hou, Hua-Bin Chen, Xiao-Liang Xie

AAAI 2020, pp. 11782-11790

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed PAANet is evaluated on the Cata7 dataset and the MICCAI EndoVis 2017 dataset. It achieves a new record of 97.10% mIOU on Cata7 and places first in the MICCAI EndoVis Challenge 2017 with a 9.90% increase in mIOU.
Researcher Affiliation | Academia | (1) University of Chinese Academy of Sciences, Beijing 100049, China; (2) State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; (3) CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China. {nizhenliang2017, guibin.bian, wangguanan2015, xiaohu.zhou, zengguang.hou, chenhuabin2019, xiaoliang.xie}@ia.ac.cn
Pseudocode | No | The paper includes architectural diagrams but no explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no statement or link indicating that code for the described method is openly available.
Open Datasets | Yes | The MICCAI EndoVis 2017 dataset is from the MICCAI EndoVis Challenge 2017 (Allan et al. 2019).
Dataset Splits | No | The paper specifies training and test splits for both datasets ("1800 frames for training and 700 frames for the test" for Cata7; "1800 images for training and 1200 images for the test" for EndoVis 2017) but does not describe a separate validation split.
Hardware Specification | No | The paper mentions "limited computing resources" but gives no specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper states that the network is implemented in PyTorch and that Adam is used as the optimizer, but provides no version numbers for PyTorch or other dependencies.
Experiment Setup | Yes | Adam is used as the optimizer with a batch size of 8. To prevent overfitting, a learning-rate decay schedule is applied during training: the learning rate is multiplied by 0.8 every 30 iterations. The initial learning rate is 6×10^-6 on Cata7 and 3×10^-5 on EndoVis 2017. (A minimal PyTorch sketch of this configuration follows the table.)
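
Since no official implementation is released, the following is a minimal PyTorch sketch of the training configuration reported in the Experiment Setup row, using the Cata7 learning rate. The model, class count, input resolution, and data are hypothetical placeholders (the paper's PAANet architecture is not reproduced here); only the optimizer choice, batch size, and decay schedule follow the reported values.

    import torch
    from torch import nn, optim
    from torch.optim.lr_scheduler import StepLR

    # Placeholder model: PAANet is not open-sourced, so a trivial
    # 1x1-convolution segmentation head stands in for it here.
    num_classes = 8  # hypothetical class count; not stated in this report
    model = nn.Conv2d(3, num_classes, kernel_size=1)

    # Reported hyperparameters: Adam, batch size 8, initial learning rate
    # 6e-6 on Cata7 (3e-5 on EndoVis 2017), multiplied by 0.8 every
    # 30 iterations.
    optimizer = optim.Adam(model.parameters(), lr=6e-6)
    scheduler = StepLR(optimizer, step_size=30, gamma=0.8)
    criterion = nn.CrossEntropyLoss()

    batch_size = 8
    for step in range(90):  # dummy loop in place of a real data loader
        images = torch.randn(batch_size, 3, 128, 128)  # fake frames
        labels = torch.randint(0, num_classes, (batch_size, 128, 128))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()  # decays the learning rate by 0.8 every 30 steps

StepLR with step_size=30 and gamma=0.8 reproduces the reported decay if scheduler.step() is called once per training iteration; if the paper's "every 30 iterations" in fact means epochs, the scheduler should instead be stepped once per epoch.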