MaskBooster: End-to-End Self-Training for Sparsely Supervised Instance Segmentation

Authors: Shida Zheng, Chenshu Chen, Xi Yang, Wenming Tan

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Abundant experiments are conducted on the COCO and BDD100K datasets and validate the effectiveness of MaskBooster.
Researcher Affiliation | Industry | Hikvision Research Institute {zhengshida,chenchenshu,yangxi6,tanwenming}@hikvision.com
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using the MMDetection toolbox, but does not provide a statement or link releasing the source code of MaskBooster or its methodology.
Open Datasets | Yes | Abundant experiments are conducted on the COCO and BDD100K datasets... COCO (Lin et al. 2014) under the 0.1%/1%/10% protocols and BDD100K (Yu et al. 2020).
Dataset Splits | Yes | To fully assess an approach to SpSIS, we randomly sample a ratio ρ of instances in COCO train2017, keeping their GT masks while removing GT masks for the rest of the instances. Three datasets are constructed: COCO 0.1%, COCO 1%, and COCO 10% with ρ = 0.1%/1%/10%, which have 880, 8656, and 86k GT masks, respectively. (A sketch of this split construction follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments; only general training settings are described.
Software Dependencies | No | The paper mentions using the MMDetection toolbox but does not provide specific version numbers for it or any other software dependencies crucial for replication.
Experiment Setup | Yes | The optimizer we use is SGD with a momentum of 0.9. The EMA ratio is set as α = 1e-3. The loss weights for pseudo masks are λ_h = 1 and λ_s = 10. For COCO 0.1%, due to the extremely limited GT masks, we set λ_h = 0.2 and apply Copy-Paste (Ghiasi et al. 2021). All experiments use multi-scale training and a 3× training schedule. (A hedged configuration sketch also follows the table.)
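
Note on the dataset split protocol: the quoted construction keeps GT masks for a randomly sampled ratio ρ of instances in COCO train2017 and drops the masks (leaving only box supervision) for the rest. Below is a minimal sketch of how such a split could be built with pycocotools; the random seed, the build_sparse_split helper, and the has_gt_mask flag are illustrative assumptions, not the authors' released tooling.

    import random
    from pycocotools.coco import COCO

    def build_sparse_split(ann_file, rho, seed=0):
        """Keep GT masks for a ratio `rho` of instances; drop masks for the rest."""
        coco = COCO(ann_file)
        ann_ids = sorted(coco.getAnnIds())
        random.Random(seed).shuffle(ann_ids)
        keep = set(ann_ids[: int(len(ann_ids) * rho)])
        sparse_anns = []
        for ann_id in sorted(ann_ids):
            ann = dict(coco.anns[ann_id])
            ann["has_gt_mask"] = ann_id in keep      # hypothetical flag marking mask-less instances
            if ann_id not in keep:
                ann.pop("segmentation", None)        # box-only supervision for this instance
            sparse_anns.append(ann)
        return sparse_anns

    # rho = 0.001 / 0.01 / 0.1 corresponds to the COCO 0.1% / 1% / 10% protocols,
    # roughly reproducing the 880 / 8656 / 86k GT mask counts quoted above.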
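
Note on the experiment setup: only the optimizer momentum, the EMA ratio, the pseudo-mask loss weights, and the schedule are quoted, so the sketch below fills the remaining details with placeholders. The base learning rate, the ema_update and total_loss helpers, and the reading of λ_h/λ_s as weights on hard and soft pseudo-mask loss terms are assumptions, not settings confirmed by the paper excerpt.

    import torch

    def make_optimizer(student, lr=0.02):
        # SGD with momentum 0.9 as quoted; the base LR and weight decay are assumptions.
        return torch.optim.SGD(student.parameters(), lr=lr,
                               momentum=0.9, weight_decay=1e-4)

    @torch.no_grad()
    def ema_update(teacher, student, alpha=1e-3):
        # Teacher weights track the student with EMA ratio alpha = 1e-3.
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(1.0 - alpha).add_(s, alpha=alpha)

    def total_loss(loss_supervised, loss_pseudo_hard, loss_pseudo_soft,
                   lambda_h=1.0, lambda_s=10.0):
        # lambda_h = 1 and lambda_s = 10 by default; lambda_h = 0.2 for COCO 0.1%,
        # where Copy-Paste augmentation is also applied.
        return loss_supervised + lambda_h * loss_pseudo_hard + lambda_s * loss_pseudo_soft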