Towards Evidential and Class Separable Open Set Object Detection
Authors: Ruofan Wang, Rui-Wei Zhao, Xiaobo Zhang, Rui Feng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets demonstrate the outperformance of the proposed method over existing ones. |
| Researcher Affiliation | Academia | 1School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 2Academy for Engineering and Technology, Fudan University, Shanghai 3Children's Hospital of Fudan University, National Children's Medical Center, Shanghai, China 4Shanghai Collaborative Innovation Center of Intelligent Visual Computing |
| Pseudocode | Yes | Algorithm 1: Testing procedure of EOD |
| Open Source Code | Yes | Codes are available at https://github.com/roywang021/EOD. |
| Open Datasets | Yes | Following the OSOD benchmark in (Han et al. 2022), the trainval set and the test set of PASCAL VOC (Everingham et al. 2010) are used for training and closed set evaluation. VOC-COCO-{20, 40, 60} and VOC-COCO-{2500, 5000, 20000} are used to evaluate the performance under different open set settings. |
| Dataset Splits | Yes | The trainval set and the test set of PASCAL VOC (Everingham et al. 2010) are used for training and closed set evaluation. |
| Hardware Specification | No | All models are trained on 8 GPUs with a batch size of 16. The specific model of GPUs is not mentioned. |
| Software Dependencies | No | We adopt the default learning rate schedule of Detectron2 (Wu et al. 2019), and use the SGD optimizer... While 'Detectron2' is mentioned, no specific version number for it or other software dependencies is provided. |
| Experiment Setup | Yes | We adopt the default learning rate schedule of Detectron2 (Wu et al. 2019), and use the SGD optimizer with an initial learning rate of 0.02, momentum of 0.9, and weight decay of 0.0001. The max iteration is set to 35000. All models are trained on 8 GPUs with a batch size of 16. Annealing coefficients γt, λt are set as min(1.0, max(0.0, (t − 20000)/10000)) and min(1.0, t/25000), where t is the index of the current iteration. Weighting factors ν for contrastive learning is set to 0.4 and β for regularization term is set to 0.05. The uncertainty threshold τf for unknown identification is set to 0.02 by default. |
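The annealing schedules quoted in the experiment setup can be sketched directly from the reported formulas. A minimal sketch, assuming iteration index `t` counts from 0 and matches the paper's max iteration of 35000 (the function names `gamma_t` and `lambda_t` are our own labels, not from the paper's code):

```python
def gamma_t(t: int) -> float:
    # Annealing coefficient gamma_t: stays at 0 until iteration 20000,
    # then ramps linearly to 1.0 by iteration 30000.
    return min(1.0, max(0.0, (t - 20000) / 10000))

def lambda_t(t: int) -> float:
    # Annealing coefficient lambda_t: ramps linearly from 0 at
    # iteration 0 to 1.0 at iteration 25000, then stays at 1.0.
    return min(1.0, t / 25000)

# Sample the schedules at a few iterations for illustration.
schedule = [(t, gamma_t(t), lambda_t(t)) for t in (0, 20000, 25000, 35000)]
```

At iteration 0 both coefficients are 0; by the max iteration (35000) both have reached 1.0, consistent with the reported min/max clamping.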