Described Object Detection: Liberating Object Detection with Flexible Expressions
Authors: Chi Xie, Zhao Zhang, Yixuan Wu, Feng Zhu, Rui Zhao, Shuang Liang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | By evaluating previous SOTA methods on D3, we find some troublemakers that fail current REC, OVD, and bi-functional methods. Building upon the aforementioned findings, we propose a baseline that largely improves REC methods by reconstructing the training data and introducing a binary classification sub-task, outperforming existing methods. |
| Researcher Affiliation | Collaboration | Chi Xie (Tongji University), Zhao Zhang (SenseTime Research), Yixuan Wu (Zhejiang University), Feng Zhu (SenseTime Research), Rui Zhao (SenseTime Research), Shuang Liang (Tongji University) |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Data and code are available at this URL and related works are tracked in this repo. |
| Open Datasets | Yes | For DOD, we introduce the Description Detection Dataset (D3, /dikju:b/)... Data and code are available at this URL and related works are tracked in this repo. |
| Dataset Splits | No | The paper describes D3 as an 'evaluation-only benchmark' and details the evaluation metrics (mAP, intra-/inter-scenario) computed on it. However, it does not explicitly define training, validation, and test splits, either for D3 itself or for the data used to develop the proposed baseline. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for ancillary software dependencies (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | No | The paper describes modifications made to OFA and how data was reconstructed, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
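
As context for the Research Type row: the baseline described there augments a referring-expression-comprehension (REC) model with a binary sub-task that predicts whether the described object is present in the image at all (D3 includes absence descriptions). The sketch below is a minimal illustration of that idea only, not the authors' implementation; the module name `PresenceHead`, the feature dimensions, the pooled input, and the loss weight `alpha` are all assumptions.

```python
import torch
import torch.nn as nn


class PresenceHead(nn.Module):
    """Hypothetical auxiliary head: predicts whether the described object
    is present, given pooled image-description features from a REC-style
    model. Names and dimensions are illustrative assumptions."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 1),
        )

    def forward(self, fused_feats: torch.Tensor) -> torch.Tensor:
        # fused_feats: (batch, feat_dim) pooled image-description features
        return self.classifier(fused_feats).squeeze(-1)  # presence logits


def combined_loss(fused_feats, box_loss, presence_labels, head, alpha=0.5):
    """Add the auxiliary binary presence loss to the usual grounding/box
    loss; alpha is an assumed weighting, not taken from the paper."""
    presence_logits = head(fused_feats)
    presence_loss = nn.functional.binary_cross_entropy_with_logits(
        presence_logits, presence_labels.float()
    )
    return box_loss + alpha * presence_loss
```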
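
For the evaluation metrics noted in the Dataset Splits row, D3 reports COCO-style mAP under intra- and inter-scenario protocols. The snippet below is a hedged sketch of computing plain COCO-style mAP with `torchmetrics`, assuming predictions and D3 ground truth have already been converted to box/score/label tensors; the conversion step and the intra-/inter-scenario grouping are omitted, and the tensor values are placeholders.

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# Placeholder prediction/target for a single image; real use would convert
# D3 annotations and model outputs into this format per description.
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),  # xyxy
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([3]),  # index of the description/category
}]
targets = [{
    "boxes": torch.tensor([[12.0, 25.0, 105.0, 215.0]]),
    "labels": torch.tensor([3]),
}]

metric = MeanAveragePrecision(box_format="xyxy")
metric.update(preds, targets)
print(metric.compute()["map"])  # COCO-style mAP over IoU 0.50:0.95
```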