SCNet: Training Inference Sample Consistency for Instance Segmentation

Authors: Thang Vu, Haeyong Kang, Chang D. Yoo

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the standard COCO dataset reveal the effectiveness of the proposed method over multiple evaluation metrics, including box AP, mask AP, and inference speed.
Researcher Affiliation | Academia | Thang Vu, Haeyong Kang, Chang D. Yoo, Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, {thangvubk, haeyong.kang, cd_yoo}@kaist.ac.kr
Pseudocode | No | The paper includes architectural diagrams but no structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/thangvubk/SCNet.
Open Datasets | Yes | The COCO (Lin et al. 2014) train and val splits are used for training and inference, respectively.
Dataset Splits | Yes | The COCO (Lin et al. 2014) train and val splits are used for training and inference, respectively.
Hardware Specification | Yes | It takes about one day for the models to converge on 8 Tesla V100 GPUs. ... The runtime is measured on a single Tesla V100 GPU.
Software Dependencies | No | PyTorch (Paszke et al. 2017) and MMDetection (Chen et al. 2019b) are used for implementation. Specific version numbers for these software components are not provided.
Experiment Setup | Yes | The stage loss weights and semantic loss weight, which are adopted from (Chen et al. 2019a), are set to α = [1, 0.5, 0.25] and γ = 0.2, respectively. The global context loss weight is set to λ = 3. In all experiments, the long edge and short edge of the images are resized to 1333 and 800, respectively, without changing the aspect ratio. ... The learning rate is initialized to 0.02 and divided by 10 after 16 and 19 epochs, respectively.
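For readers who want to mirror the reported setup, the sketch below expresses it as an MMDetection-style Python config. It is a minimal illustration under assumptions: the keys `img_scale`, `optimizer`, and `lr_config` follow standard MMDetection conventions, while the SCNet-specific field names (`stage_loss_weights`, `semantic_loss_weight`, `glbctx_loss_weight`), the total epoch count, and the SGD momentum/weight-decay values are not taken from the paper text; consult the released code at https://github.com/thangvubk/SCNet for the authoritative configuration.

```python
# Illustrative MMDetection-style config fragment for the training schedule
# described in the Experiment Setup row. Field names marked "assumed" are
# hypothetical and may differ from the released SCNet configs.

model = dict(
    type='SCNet',
    roi_head=dict(
        stage_loss_weights=[1, 0.5, 0.25],  # alpha (assumed key name)
        semantic_loss_weight=0.2,           # gamma (assumed key name)
        glbctx_loss_weight=3.0,             # lambda, global context loss (assumed key name)
    ),
)

# Long/short image edges resized to 1333/800 without changing the aspect ratio.
train_pipeline = [
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
]

# Learning rate initialized to 0.02 and divided by 10 after epochs 16 and 19.
# Momentum, weight decay, and the 20-epoch total are assumed MMDetection defaults.
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
lr_config = dict(policy='step', step=[16, 19])
runner = dict(type='EpochBasedRunner', max_epochs=20)
```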