Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity
Authors: Byungseok Roh, JaeWoong Shin, Wuhyun Shin, Saehoon Kim
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the COCO 2017 benchmark (Lin et al., 2014) demonstrate that Sparse DETR effectively reduces computational cost while achieving better detection performance. |
| Researcher Affiliation | Industry | Byungseok Roh¹, JaeWoong Shin², Wuhyun Shin¹, Saehoon Kim¹ (¹Kakao Brain, ²Lunit); {peter.roh,aiden.hsin,sam.kim}@kakaobrain.com, jwoong.shin@lunit.io |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/kakaobrain/sparse-detr. |
| Open Datasets | Yes | Extensive experiments on the COCO 2017 benchmark (Lin et al., 2014) |
| Dataset Splits | Yes | Extensive experiments on the COCO 2017 benchmark (Lin et al., 2014) demonstrate that Sparse DETR effectively reduces computational cost while achieving better detection performance. |
| Hardware Specification | Yes | We train the model on a 4 V100 GPU machine with a total batch size of 16 |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | We train the model on a 4 V100 GPU machine with a total batch size of 16, for 50 epochs, where the initial learning rate is 0.0002 and decayed by 1/10 at the 40th epoch. (See the training-schedule sketch below the table.) |
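
The reported setup maps onto a standard PyTorch training schedule. Below is a minimal sketch of that schedule, not the authors' code: the optimizer choice (AdamW, common in the DETR family), the placeholder model, and all identifiers are assumptions; only the numbers (batch size 16 across 4 GPUs, 50 epochs, initial learning rate 2e-4, decay by 1/10 at epoch 40) come from the quoted setup.

```python
# Hedged sketch of the reported training schedule; assumes PyTorch.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import MultiStepLR

NUM_GPUS = 4           # "a 4 V100 GPU machine"
TOTAL_BATCH_SIZE = 16  # total across GPUs, i.e. 4 images per GPU
NUM_EPOCHS = 50        # "for 50 epochs"

# Placeholder module standing in for the actual Sparse DETR model.
model = torch.nn.Linear(256, 91)

# "the initial learning rate is 0.0002"; AdamW is an assumption here.
optimizer = AdamW(model.parameters(), lr=2e-4)

# "decayed by 1/10 at the 40th epoch"
scheduler = MultiStepLR(optimizer, milestones=[40], gamma=0.1)

for epoch in range(NUM_EPOCHS):
    # ... one training pass over COCO 2017 train would go here ...
    scheduler.step()
```

For the actual model definition and full training loop, the released code at https://github.com/kakaobrain/sparse-detr is the authoritative reference.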