DAC-DETR: Divide the Attention Layers and Conquer

Authors: Zhengdong Hu, Yifan Sun, Jingdong Wang, Yi Yang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to validate the effectiveness of DAC-DETR and empirically show remarkable improvement over various DETRs. For example, based on a popular baseline, i.e., ResNet-50 Deformable DETR [45], DAC-DETR brings +3.4 AP improvement and achieves 47.1 AP on MS-COCO within 12 (1×) training epochs. On some more recent state-of-the-art methods (that usually integrate a battery of good practices), DAC-DETR still gains consistent and complementary benefits.
Researcher Affiliation | Collaboration | Zhengdong Hu¹,², Yifan Sun², Jingdong Wang², Yi Yang³. ¹ReLER, AAII, University of Technology Sydney; ²Baidu Inc.; ³CCAI, College of Computer Science and Technology, Zhejiang University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code will be made available at https://github.com/huzhengdongcs/DAC-DETR.
Open Datasets | Yes | We evaluate the proposed DAC-DETR on COCO 2017 [17] detection dataset.
Dataset Splits | Yes | We evaluate the proposed DAC-DETR on COCO 2017 [17] detection dataset. Following common practice, we evaluate the performance on the validation set (5k images) using the standard average precision (AP) metric under different IoU thresholds. (A minimal evaluation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | As for training, we use the AdamW [22, 14] optimizer with a weight decay of 1 × 10⁻⁴. (A minimal optimizer sketch follows the table.)
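
The Dataset Splits row describes the standard COCO evaluation protocol: AP averaged over IoU thresholds 0.50:0.95 on the val2017 split (5k images). Below is a minimal sketch of that protocol using pycocotools; the annotation path and the `detections.json` results file are placeholders for illustration, not artifacts from the paper.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations for the COCO 2017 validation split (5k images).
# The path is a placeholder; adjust it to your local COCO layout.
coco_gt = COCO("annotations/instances_val2017.json")

# Detections in the standard COCO results format; "detections.json" is a
# hypothetical output file produced by whatever detector you are evaluating.
coco_dt = coco_gt.loadRes("detections.json")

# Standard box AP: averaged over IoU thresholds 0.50:0.95, with AP50/AP75
# and small/medium/large breakdowns reported by summarize().
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
```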
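
The Experiment Setup row pins down only the optimizer family (AdamW) and the weight decay (1 × 10⁻⁴). Here is a minimal PyTorch sketch under those two facts; the learning rate, the 12-epoch step schedule, and the stand-in model are assumed Deformable-DETR-style defaults for illustration, not values quoted from the paper.

```python
import torch

# Stand-in module; the real DAC-DETR model is defined in the authors'
# repository (https://github.com/huzhengdongcs/DAC-DETR).
model = torch.nn.Linear(256, 91)

# AdamW with weight decay 1e-4, as stated in the paper. The learning rate
# is an assumed default; the excerpt does not specify it.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)

# A typical 12-epoch (1x) schedule drops the learning rate near the end;
# the milestone here is an assumption, not taken from the paper.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[11], gamma=0.1)

for epoch in range(12):
    # ... one training epoch over COCO train2017 would run here ...
    scheduler.step()
```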