What Makes for End-to-End Object Detection?

Authors: Peize Sun, Yi Jiang, Enze Xie, Wenqi Shao, Zehuan Yuan, Changhu Wang, Ping Luo

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments are conducted on the challenging COCO benchmark (Lin et al., 2014). We use the standard COCO metric AP, averaged over IoU thresholds. All models are trained on the train2017 split (118k images) and evaluated on val2017 (5k images).
Researcher Affiliation | Collaboration | 1Department of Computer Science, The University of Hong Kong; 2AI Lab, ByteDance; 3Department of Electronic Engineering, The Chinese University of Hong Kong.
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | Our experiments are conducted on the challenging COCO benchmark (Lin et al., 2014). We use the standard COCO metric AP, averaged over IoU thresholds. All models are trained on the train2017 split (118k images) and evaluated on val2017 (5k images). CrowdHuman (Shao et al., 2018) is a widely-used benchmark for crowded object detection.
Dataset Splits | Yes | All models are trained on the train2017 split (118k images) and evaluated on val2017 (5k images).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) that were used in the experiments.
Experiment Setup | No | The paper describes general experimental settings, such as the detectors and datasets used, but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed training configurations in the main text.