Deformable Part Region Learning for Object Detection
Authors: Seung-Hwan Bae
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Without bells and whistles, our implementation of a Cascade deformable part region detector achieves better detection and segmentation mAPs on COCO and VOC datasets, compared to the recent cascade and other state-of-the-art detectors. |
| Researcher Affiliation | Academia | Seung-Hwan Bae Vision and Learning Laboratory, Inha University, Korea shbae@inha.ac.kr |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing open-source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | Our D-PRD and Cascade D-PRD are evaluated on MSCOCO17 (Lin et al. 2014) and PASCAL VOC07/12 (Everingham et al. 2015) datasets. |
| Dataset Splits | No | The paper mentions the COCO and VOC trainval/test sets but does not explicitly provide split percentages, sample counts for training, validation, and test, or citations for the exact splits needed for reproduction. |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., specific GPU/CPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper mentions "We use the Detectron2." but does not provide specific version numbers for Detectron2 or its underlying software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We use the default learning schedules 1x or 3x (12 or 37 COCO epochs) of Detectron2 for all the evaluation below. Also, all other setting parameters for training and testing are the same as those of Detectron2. We set the IoU threshold to (0.5, 0.6, 0.7) from the first to last stage. |
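For orientation, the quoted setup maps onto Detectron2's standard cascade configuration keys. The following YAML is a hedged sketch, not the authors' released config (none is provided): it assumes Detectron2's built-in `CascadeROIHeads` and its default `ROI_BOX_CASCADE_HEAD.IOUS` option, while the D-PRD-specific part-region modules would require custom code outside stock Detectron2.

```yaml
# Sketch of the reported cascade settings in Detectron2 config syntax.
# Assumption: stock CascadeROIHeads; D-PRD modules are not included here.
MODEL:
  ROI_HEADS:
    NAME: "CascadeROIHeads"
  ROI_BOX_CASCADE_HEAD:
    IOUS: [0.5, 0.6, 0.7]   # per-stage IoU thresholds quoted in the paper
SOLVER:
  MAX_ITER: 90000           # Detectron2's default "1x" COCO schedule (~12 epochs)
```

The "3x" schedule mentioned in the paper corresponds to Detectron2's 270k-iteration default (~37 COCO epochs).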