Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning
Authors: Na Dong, Yongqiang Zhang, Mingli Ding, Gim Hee Lee
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on standard incremental object detection and incremental few-shot object detection settings show that our approach significantly outperforms state-of-the-art methods by a large margin. |
| Researcher Affiliation | Academia | Na Dong¹,²*, Yongqiang Zhang², Mingli Ding², Gim Hee Lee¹. ¹ Department of Computer Science, National University of Singapore; ² School of Instrument Science and Engineering, Harbin Institute of Technology |
| Pseudocode | No | The paper describes methods and formulas in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/dongnana777/Incremental-DETR. |
| Open Datasets | Yes | We conduct the evaluation on the popular object detection benchmark MS COCO 2017 (Lin et al. 2014) which covers 80 object classes. ... Specifically, we conduct the experimental evaluations on two widely used object detection benchmarks: MS COCO 2017 and PASCAL VOC 2007 (Everingham et al. 2010). |
| Dataset Splits | No | The paper specifies using the 'val' set of COCO and the 'test' set of VOC as testing data, but does not explicitly mention a separate validation set for hyperparameter tuning or early stopping during training. |
| Hardware Specification | Yes | The training is carried out on 8 RTX 6000 GPUs with a batch size of 2 per GPU. |
| Software Dependencies | No | The paper mentions 'ResNet-50' as a feature extractor and refers to 'Deformable DETR (Zhu et al. 2020)' and the 'AdamW optimizer' but does not specify versions for underlying software dependencies such as Python, PyTorch, or CUDA. |
| Experiment Setup | Yes (see the sketch below the table) | λ, λ_feat and λ_cls are set to 1, 0.1 and 2, respectively. We train our model using the AdamW optimizer with an initial learning rate of 2 × 10⁻⁴ and a weight decay of 1 × 10⁻⁴. We train our model for 50 epochs and the learning rate is decayed at the 40th epoch by a factor of 0.1. ... We fine-tune the model for 1 epoch with a learning rate of 2 × 10⁻⁴. |
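
The optimizer and schedule values quoted in the Experiment Setup row map directly onto a standard PyTorch training loop. The sketch below is illustrative only: the linear model, dummy batch, and loss are placeholders (assumptions) standing in for the authors' Deformable-DETR-based detector and COCO data pipeline; only the AdamW settings, the 50-epoch schedule, and the decay at epoch 40 come from the paper.

```python
# Minimal sketch of the reported base-training schedule, using vanilla PyTorch.
# Everything except the optimizer/schedule numbers is a placeholder, not the
# authors' implementation (see their repo for the actual training code).
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for the Deformable-DETR-based detector

# AdamW with lr = 2e-4 and weight decay = 1e-4, as reported in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=1e-4)
# Learning rate decayed by a factor of 0.1 at epoch 40 of the 50-epoch run.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40], gamma=0.1)

for epoch in range(50):
    # The real loop iterates the COCO loader (batch size 2 per GPU on 8 GPUs)
    # and combines the detection losses with weights λ = 1, λ_feat = 0.1, λ_cls = 2.
    inputs = torch.randn(2, 10)   # dummy batch
    loss = model(inputs).sum()    # dummy loss standing in for the weighted sum
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The separate one-epoch fine-tuning stage quoted in the same row would presumably reuse the same AdamW configuration at a learning rate of 2 × 10⁻⁴ without the decay step, but the paper does not spell out further details of that stage.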