Co-mining: Self-Supervised Learning for Sparsely Annotated Object Detection

Authors: Tiancai Wang, Tong Yang, Jiale Cao, Xiangyu Zhang

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are performed on the MS COCO dataset under three different sparsely annotated settings using two typical frameworks: the anchor-based detector RetinaNet and the anchor-free detector FCOS. Experimental results show that our Co-mining with RetinaNet achieves 1.4%~2.1% improvements compared with different baselines and surpasses existing methods under the same sparsely annotated setting.
Researcher Affiliation | Collaboration | Tiancai Wang (1), Tong Yang (1), Jiale Cao (2), Xiangyu Zhang (1); 1 MEGVII Technology, 2 Tianjin University; {wangtiancai, yangtong, zhangxiangyu}@megvii.com, connor@tju.edu.cn
Pseudocode | Yes | Algorithm 1: Our Co-mining Algorithm (a hedged code sketch of the mutual pseudo-label mining idea follows the table below).
Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the Co-mining method. It only mentions re-implementing other methods due to the lack of their source code.
Open Datasets | Yes | Experiments are conducted on sparsely annotated versions of the MS COCO dataset (Lin et al. 2014). The COCO-2017 train set with sparse annotations is used for training.
Dataset Splits | Yes | The COCO-2017 train set with sparse annotations is used for training. The COCO-2017 validation set with complete annotations is used for all performance evaluations (see the evaluation sketch after the table).
Hardware Specification | Yes | We adopt 8 TITAN 2080ti GPUs with a batch size of 16 for training.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, such as deep learning frameworks or programming languages.
Experiment Setup | Yes | During training, there are 90k iterations in total. The learning rate is initially set to 0.01 and gradually decreases to 0.001 and 0.0001 at 60k and 80k iterations. A warm-up strategy is adopted for the first 1k iterations to stabilize the training process (see the schedule sketch after the table).
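
For readers who want a concrete picture of the mutual pseudo-label mining idea referenced in the Pseudocode row, below is a minimal, hypothetical sketch, not the authors' Algorithm 1. The helper names (`mine_pseudo_boxes`, `detector_forward`, `detector_loss`) and the thresholds are illustrative assumptions; only the overall structure of two augmented views supervising each other with mined pseudo boxes is taken from the paper's description.

```python
# Hypothetical sketch of a co-mining-style training step; not the authors' code.
import torch
from torchvision.ops import box_iou


def mine_pseudo_boxes(pred_boxes, pred_scores, gt_boxes,
                      score_thresh=0.5, iou_thresh=0.5):
    """Keep confident predictions that do not overlap the sparse GT boxes."""
    boxes = pred_boxes[pred_scores >= score_thresh]
    if boxes.numel() > 0 and gt_boxes.numel() > 0:
        overlap = box_iou(boxes, gt_boxes).max(dim=1).values
        boxes = boxes[overlap < iou_thresh]  # keep only newly "mined" objects
    return boxes


def co_mining_step(detector_forward, detector_loss, view_a, view_b, gt_boxes):
    """Two augmented views of the same image supervise each other."""
    boxes_a, scores_a = detector_forward(view_a)
    boxes_b, scores_b = detector_forward(view_b)

    # Confident detections mined from one view are appended to the sparse
    # ground truth used to supervise the other view.
    gt_for_a = torch.cat([gt_boxes, mine_pseudo_boxes(boxes_b, scores_b, gt_boxes)])
    gt_for_b = torch.cat([gt_boxes, mine_pseudo_boxes(boxes_a, scores_a, gt_boxes)])

    return detector_loss(view_a, gt_for_a) + detector_loss(view_b, gt_for_b)
```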
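
Evaluation on the fully annotated COCO-2017 validation set would typically be run with pycocotools, as in the sketch below. The annotation and result file paths are placeholders, not paths taken from the paper.

```python
# Standard pycocotools evaluation sketch for the setting described above:
# training on sparsely annotated COCO-2017 train, evaluating on complete val.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")   # complete val annotations
coco_dt = coco_gt.loadRes("detections_val2017.json")   # detector outputs (COCO format)

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP, AP50, AP75, etc.
```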
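
The learning-rate schedule quoted in the Experiment Setup row can be written out as a small helper. The base rate of 0.01, the step decays at 60k and 80k iterations, and the 1k warm-up window come from the quoted setup; the linear warm-up form and the warm-up factor are assumptions, since the report only states that a warm-up strategy is used.

```python
def learning_rate(it, base_lr=0.01, warmup_iters=1000, warmup_factor=0.001):
    """Learning rate at iteration `it` over a 90k-iteration schedule."""
    if it < warmup_iters:                  # assumed linear warm-up over the first 1k iters
        alpha = it / warmup_iters
        return base_lr * (warmup_factor * (1 - alpha) + alpha)
    if it < 60_000:
        return base_lr                     # 0.01 until 60k
    if it < 80_000:
        return base_lr * 0.1               # 0.001 until 80k
    return base_lr * 0.01                  # 0.0001 until 90k
```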