Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection

Authors: Na Dong, Yongqiang Zhang, Mingli Ding, Gim Hee Lee

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the PASCAL VOC and MS COCO datasets show that our proposed method significantly outperforms other state-of-the-art class-incremental object detection methods when there is no co-occurrence between the base and novel classes during training.
Researcher Affiliation | Academia | Na Dong (1,2), Yongqiang Zhang (2), Mingli Ding (2), Gim Hee Lee (1). 1: Department of Computer Science, National University of Singapore. 2: School of Instrument Science and Engineering, Harbin Institute of Technology. {dongna1994, zhangyongqiang, dingml}@hit.edu.cn, gimhee.lee@comp.nus.edu.sg
Pseudocode | No | The paper describes its approach in detail using prose and mathematical equations but does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our source code is available at https://github.com/dongnana777/Bridging-Non-Co-occurrence.
Open Datasets | Yes | Following [27], we evaluate our proposed method for class-incremental object detection on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Dataset Splits | Yes | PASCAL VOC 2007 consists of about 5K training and validation images and 5K test images over 20 object categories. Models are trained on the trainval set and tested on the test set. MS COCO 2014 contains objects from 80 different categories, with 83K images in the training set and 41K images in the validation set. We train models on the training set and evaluate them on the first 5K images of the validation set. (A minimal loading sketch follows the table.)
Hardware Specification | Yes | The training is carried out on a single RTX 2080 Ti GPU, and the batch size is set to 1.
Software Dependencies | No | The paper mentions using "ResNet-50 with frozen batch normalization layers as the backbone network" and "stochastic gradient descent with Nesterov momentum", but does not specify version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, or CUDA versions).
Experiment Setup | Yes | The initial learning rate is set to 1e-3 and subsequently reduced by a factor of 0.1 after every 5 epochs for both the previous model and the current model. Each model is trained for 20 epochs on both the PASCAL VOC and MS COCO datasets. The training is carried out on a single RTX 2080 Ti GPU, and the batch size is set to 1. (A configuration sketch follows the table.)
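For concreteness, here is a minimal sketch of how the MS COCO evaluation split described in the Dataset Splits row could be constructed with torchvision. The file paths are placeholders, and "first 5K" is assumed to mean the default index order of the annotation file, which the paper does not specify.

```python
from torch.utils.data import Subset
from torchvision.datasets import CocoDetection

# Sketch of the evaluation split from the Dataset Splits row: the first 5K
# images of the MS COCO 2014 validation set. The root and annotation paths
# below are placeholders for wherever COCO 2014 is stored locally.
coco_val = CocoDetection(
    root="coco/val2014",
    annFile="coco/annotations/instances_val2014.json",
)

# Evaluate on these 5K images only, taken in the dataset's default order.
minival = Subset(coco_val, indices=range(5000))
```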
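The Software Dependencies and Experiment Setup rows together pin down most of the optimization recipe. Below is a minimal PyTorch sketch of that configuration, assuming a recent torchvision. The momentum value and the choice of ImageNet-pretrained weights are assumptions (the paper only names "SGD with Nesterov momentum"), "reduced by 0.1" is read as multiplicative decay, and only the backbone is shown; the full detector built on top of it is omitted.

```python
import torch
import torchvision
from torchvision.ops import FrozenBatchNorm2d

# ResNet-50 backbone with frozen batch-norm layers, as reported in the paper.
# FrozenBatchNorm2d keeps running statistics and affine parameters fixed.
backbone = torchvision.models.resnet50(
    weights="IMAGENET1K_V1",       # assumed initialization, not stated in the paper
    norm_layer=FrozenBatchNorm2d,
)

# SGD with Nesterov momentum at the reported initial learning rate of 1e-3.
optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad],
    lr=1e-3,
    momentum=0.9,                  # assumed value; the paper gives no number
    nesterov=True,
)

# Learning rate multiplied by 0.1 after every 5 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(20):            # 20 epochs total, per the Experiment Setup row
    # ... one pass over the training set with batch size 1 ...
    scheduler.step()
```

Since the batch size is 1, freezing batch normalization is the natural choice here: per-batch statistics computed from a single image would be degenerate, so the backbone reuses the fixed statistics from pretraining instead.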