DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection

Authors: Gang Li, Xiang Li, Yujie Wang, Yichao Wu, Ding Liang, Shanshan Zhang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on COCO benchmark under Partially Labeled Data and Fully Labeled Data settings."
Researcher Affiliation | Collaboration | ¹Nanjing University of Science and Technology, ²SenseTime Research, ³Nankai University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code will be released at: https://github.com/ligang-cs/DTG-SSOD."
Open Datasets | Yes | "We benchmark our proposed method on the challenging dataset, MS COCO [34]."
Dataset Splits | Yes | "The val2017 set with 5k images is used as the validation set. We describe two training settings as follows: Partially Labeled Data. The train2017 set, consisting of 118k labeled images, is used as the training dataset, from which we randomly sample 1%, 2%, 5%, and 10% of images as labeled data, and set the remaining unsampled images as unlabeled data. Following the practice of previous methods [28, 9, 13], for each labeling ratio, 5 different folds are provided and the final result is the average of these 5 folds." (A sketch of this split construction follows the table.)
Hardware Specification | Yes | "The model is trained for 180k iterations on 8 V100 GPUs with an initial learning rate of 0.01"
Software Dependencies | No | The paper mentions using Faster RCNN, FPN, ResNet50, and SGD, but does not specify software dependencies with version numbers (e.g., PyTorch or TensorFlow versions).
Experiment Setup | Yes | "The model is trained for 180k iterations on 8 V100 GPUs with an initial learning rate of 0.01, which is then divided by 10 at 120k iteration and again at 160k iteration. Mini-batch size per GPU is 5, with 1 labeled image and 4 unlabeled images. The loss weight of unlabeled images α is set to 4.0. ... We set τ to 0.9 in NMS of the RPN stage, and 0.45 in NMS of the R-CNN stage, empirically." (The second sketch below illustrates this schedule and loss weighting.)
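
As a minimal sketch of the Partially Labeled Data protocol quoted above, the following Python builds per-ratio, per-fold labeled/unlabeled splits of COCO train2017. It assumes the standard COCO annotation JSON; the function name make_partial_splits and the per-fold seeding scheme are illustrative assumptions, not the authors' released tooling.

```python
import json
import random

def make_partial_splits(ann_file, ratios=(0.01, 0.02, 0.05, 0.10), num_folds=5):
    """Partition COCO train2017 image ids into labeled/unlabeled sets.

    For each labeling ratio, draws `num_folds` independent random samples
    (folds); per the paper, the reported score is the average over the 5 folds.
    """
    with open(ann_file) as f:
        image_ids = [img["id"] for img in json.load(f)["images"]]

    splits = {}
    for ratio in ratios:
        n_labeled = int(len(image_ids) * ratio)
        for fold in range(num_folds):
            rng = random.Random(fold)  # one seed per fold, for reproducibility
            labeled = set(rng.sample(image_ids, n_labeled))
            unlabeled = [i for i in image_ids if i not in labeled]
            splits[(ratio, fold)] = (sorted(labeled), unlabeled)
    return splits

# Usage (path is an assumption about the local COCO layout):
# splits = make_partial_splits("annotations/instances_train2017.json")
```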
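Likewise, a small sketch of the quoted training configuration: the step-decay learning-rate schedule (0.01, divided by 10 at 120k and again at 160k iterations) and the weighted combination of labeled and unlabeled losses with α = 4.0. The helper names are hypothetical; the paper does not prescribe this API.

```python
def learning_rate(step, base_lr=0.01, milestones=(120_000, 160_000), gamma=0.1):
    """Step-decay schedule from the paper: 0.01, /10 at 120k, /10 again at 160k."""
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= gamma
    return lr

def total_loss(loss_labeled, loss_unlabeled, alpha=4.0):
    """Combine supervised and unsupervised losses; alpha = 4.0 per the paper."""
    return loss_labeled + alpha * loss_unlabeled

# Sanity checks on the schedule (hypothetical usage):
assert learning_rate(0) == 0.01
assert abs(learning_rate(130_000) - 0.001) < 1e-9
assert abs(learning_rate(170_000) - 0.0001) < 1e-9
```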