Low-Confidence Samples Mining for Semi-supervised Object Detection
Authors: Guandu Liu, Fangyuan Zhang, Tianxiang Pan, Jun-Hai Yong, Bin Wang
IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the MS-COCO benchmark, our method achieves 3.54% mAP improvement over state-of-the-art methods under 5% labeling ratios. ... Specifically, LSM introduces an additional branch called pseudo information mining (PIM) for self-learning low-confidence pseudo-labels. ... We conduct a cross-domain task and introduce DDETR baseline into SSOD. ... We carry out extensive experiments to validate the effectiveness of LSM on the MS-COCO [Lin et al., 2014], PASCAL VOC [Everingham et al., 2010], and ImageNet [Deng et al., 2009] benchmarks. |
| Researcher Affiliation | Academia | Guandu Liu (1,2), Fangyuan Zhang (1,2), Tianxiang Pan (1,2), Jun-Hai Yong (1,2), and Bin Wang (1,2). (1) School of Software, Tsinghua University, China; (2) Beijing National Research Center for Information Science and Technology (BNRist), China. {liugd21, zhangfy19}@mails.tsinghua.edu.cn, ptx9363@gmail.com, {yongjh, wangbins}@tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to open-source code or an explicit statement that the code is publicly available. |
| Open Datasets | Yes | In this section, we carry out extensive experiments to validate the effectiveness of LSM on the MS-COCO [Lin et al., 2014], PASCAL VOC [Everingham et al., 2010], and ImageNet [Deng et al., 2009] benchmarks. |
| Dataset Splits | Yes | MS-COCO contains two training sets, the train2017 dataset with 118K labeled images and the unlabeled2017 dataset with 123K unlabeled images. ... We evaluate the model on COCO-val2017 for (1)(2) and VOC07-test for (3). ... For COCO-standard, the entire training steps are 180,000, of which the first 20,000 steps are used to pre-train the student model with labeled images. |
| Hardware Specification | No | The paper does not explicitly describe the hardware specifications (e.g., specific GPU or CPU models) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'Faster-RCNN as our base object detector' and 'Deformable-DETR (DDETR)', but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For the main branch, we set pseudo boxes filtering threshold t to 0.7. While for LSM, which can have a higher tolerance for pseudo boxes, we set the threshold α to 0.5. ... For COCO-standard, the entire training steps are 180,000, of which the first 20,000 steps are used to pre-train the student model with labeled images. ... strong data augmentation involves random jittering, Gaussian noise, crop, and weak data augmentation involves random resize and flip. (A minimal threshold-splitting sketch is given below the table.) |
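
The Experiment Setup row quotes two confidence thresholds: a strict filter t = 0.7 for the main branch and a looser filter α = 0.5 for the pseudo information mining (PIM) branch. The sketch below is one plausible way such a dual-threshold split of teacher predictions could look in PyTorch; the function name `split_pseudo_labels`, the constants, and the branch routing are assumptions for illustration, not the authors' released implementation.

```python
import torch

# Assumed constants taken from the paper's reported settings.
MAIN_THRESHOLD = 0.7   # t: strict filter for the main teacher-student branch
LSM_THRESHOLD = 0.5    # alpha: looser filter for the PIM (low-confidence mining) branch


def split_pseudo_labels(boxes: torch.Tensor, scores: torch.Tensor):
    """Split teacher predictions into high- and low-confidence pseudo-labels.

    boxes:  (N, 4) predicted boxes from the teacher model
    scores: (N,)   classification confidences for those boxes
    """
    high_mask = scores >= MAIN_THRESHOLD
    # Low-confidence boxes: above alpha but below the main threshold.
    low_mask = (scores >= LSM_THRESHOLD) & ~high_mask

    main_branch_labels = boxes[high_mask]   # supervise the main branch
    pim_branch_labels = boxes[low_mask]     # self-learned by the PIM branch
    return main_branch_labels, pim_branch_labels


if __name__ == "__main__":
    boxes = torch.rand(6, 4)
    scores = torch.tensor([0.95, 0.72, 0.65, 0.55, 0.40, 0.30])
    main, pim = split_pseudo_labels(boxes, scores)
    print(main.shape[0], "high-confidence boxes,", pim.shape[0], "low-confidence boxes")
```

Under these assumed thresholds, boxes scoring in [0.5, 0.7) are not discarded but routed to the auxiliary branch, which is the behavior the quoted setup describes at a high level.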