RepPoints v2: Verification Meets Regression for Object Detection

Authors: Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, Han Hu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on the challenging MS COCO 2017 benchmark [17], which is split into train, val and test-dev sets with 115K, 5K and 20K images, respectively. We train all the models using the train set and conduct an ablation study on the val set. A system-level comparison to other methods is reported on the test-dev set.
Researcher Affiliation | Collaboration | (1) Center of Data Science, Peking University; (2) Microsoft Research Asia; (3) Key Laboratory of Machine Perception, MOE, School of EECS, Peking University; (4) Zhejiang Lab
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks that are clearly labeled as such.
Open Source Code | Yes | The code is available at https://github.com/Scalsol/RepPointsV2.
Open Datasets | Yes | We conduct experiments on the challenging MS COCO 2017 benchmark [17].
Dataset Splits | Yes | MS COCO 2017 benchmark [17], which is split into train, val and test-dev sets with 115K, 5K and 20K images, respectively.
Hardware Specification | Yes | The speed of RepPoints v1 is 12.7 FPS (img/s) using ResNet-50 on a Titan XP GPU.
Software Dependencies | No | We use the mmdetection codebase [2] for experiments. While mmdetection is mentioned, a specific version number for this software dependency is not provided.
Experiment Setup | Yes | All experiments perform training with an SGD optimizer on 8 GPUs with 2 images per GPU, using an initial learning rate of 0.01, a weight decay of 0.0001 and momentum of 0.9. In ablations, most experiments follow the 1x settings, where 12 epochs with single-scale training of [800, 1333] are used and the learning rate is decayed by 10 after epochs 8 and 11. (A config sketch illustrating this schedule follows the table.)
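
For readers reconstructing this setup, the quoted values correspond to the standard 1x schedule used with the mmdetection codebase. The sketch below is illustrative only: the field names follow the public mmdetection 2.x config conventions rather than the authors' released configuration, and the warmup settings are assumed defaults that the quoted text does not specify.

# Illustrative mmdetection-style 1x schedule matching the quoted setup.
# Field names follow public mmdetection 2.x configs; warmup values are assumptions.
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',     # assumed default; not stated in the quoted setup
    warmup_iters=500,    # assumed default
    warmup_ratio=0.001,  # assumed default
    step=[8, 11])        # learning rate divided by 10 after epochs 8 and 11
total_epochs = 12        # 1x schedule: 12 epochs
data = dict(samples_per_gpu=2)  # 2 images per GPU on 8 GPUs -> effective batch size 16
img_scale = (1333, 800)  # single-scale training at [800, 1333]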