Calibrated Teacher for Sparsely Annotated Object Detection
Authors: Haohan Wang, Liang Liu, Boshen Zhang, Jiangning Zhang, Wuhao Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our methods set new state-of-the-art under all different sparse settings in COCO. |
| Researcher Affiliation | Collaboration | Haohan Wang¹*, Liang Liu²*, Boshen Zhang², Jiangning Zhang², Wuhao Zhang², Zhenye Gan², Yabiao Wang², Chengjie Wang²,³, Haoqian Wang¹ — ¹ Shenzhen International Graduate School, Tsinghua University; ² Tencent Youtu Lab; ³ Shanghai Jiao Tong University |
| Pseudocode | Yes | Algorithm 1: Pseudo labels in Calibrated Teacher |
| Open Source Code | No | Code will be available at https://github.com/Whileherham/CalibratedTeacher. |
| Open Datasets | Yes | Recent SAOD methods (Yang, Liang, and Carin 2020; Wang et al. 2021; Zhang et al. 2020; Rambhatla et al. 2022) are mainly evaluated on the challenging COCO-2017 dataset (Lin et al. 2014) |
| Dataset Splits | No | In the field of model calibration, the parameters of calibrators are optimized with the validation set at the end of training. However, there exist two extra challenges in our framework. The first challenge is that there is no validation set available. |
| Hardware Specification | No | The paper mentions 'ImageNet...pretrained ResNet101... and ResNet50' and training for '180k iterations with a total batch size 16', but does not specify any particular hardware (GPU, CPU, or memory) used for these experiments. |
| Software Dependencies | No | The paper mentions using 'MMDetection' but does not specify version numbers for any software components or libraries. |
| Experiment Setup | Yes | Our models are trained for 180k iterations with a total batch size 16. The learning rate is initialized as 0.01 and gradually decreases to 0.001 and 0.0001 at 120k and 160k iterations. Other hyperparameters of the architecture and training schedule are consistent with the implementation in (Chen et al. 2019). As for the confidence calibration, τ⁺, τ⁻, τ_s are set to 0.75, 0.6 and 0.7 for all detectors, respectively...For the calibrator training, T is set to 500 and L is the number of predictions of 8000 images. For the FIoU, w_0 and k are set to 0.5 and 1.5, while the α_t and γ stay consistent with focal loss. |
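The reported schedule maps directly onto an MMDetection-style config. The snippet below is a minimal sketch, not the authors' released configuration: only the iteration count, step points, initial learning rate, and total batch size come from the paper; the optimizer momentum, weight decay, and the per-GPU batch split are assumed MMDetection defaults, which the paper defers to via (Chen et al. 2019).

```python
# Minimal MMDetection 2.x-style config sketch of the reported schedule.
# Values not stated in the paper are assumed framework defaults.
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
lr_config = dict(
    policy='step',
    step=[120000, 160000],  # lr decays 0.01 -> 0.001 -> 0.0001, as reported
)
runner = dict(type='IterBasedRunner', max_iters=180000)
data = dict(samples_per_gpu=2)  # assumed 8-GPU setup: 8 x 2 = total batch size 16
```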
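The thresholds τ⁺ = 0.75 and τ⁻ = 0.6 suggest the usual three-way split of teacher predictions used in pseudo-labeling pipelines. Since Algorithm 1 itself is not quoted in this summary, the rule below is an assumed reading rather than the paper's exact procedure, and the role of τ_s (0.7) is not specified here, so it is omitted.

```python
import torch

TAU_PLUS, TAU_MINUS = 0.75, 0.60  # reported values, shared across all detectors

def select_pseudo_labels(boxes: torch.Tensor, scores: torch.Tensor):
    """Assumed three-way split of teacher predictions by calibrated confidence."""
    positive = scores >= TAU_PLUS      # confident foreground -> kept as pseudo label
    negative = scores < TAU_MINUS      # confident background -> treated as negative
    ignored = ~(positive | negative)   # uncertain band -> excluded from the loss
    return boxes[positive], boxes[negative], boxes[ignored]
```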
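The FIoU weighting itself is not reproduced in this summary, but since α_t and γ are stated to "stay consistent with focal loss", the reference point is the standard focal loss of Lin et al. (2017), where p_t is the predicted probability of the ground-truth class:

$$\mathrm{FL}(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t)$$

The common defaults are α = 0.25 and γ = 2; how w_0 = 0.5 and k = 1.5 enter the FIoU variant is left to the full paper.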