Augmenting Anchors by the Detector Itself

Authors: Xiaopei Wan, Guoqiu Li, Yujiu Yang, Zhenhua Guo

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on COCO dataset demonstrate the effectiveness of AADI... Code and models are available at https://github.com/WanXiaopei/aadi. |
| Researcher Affiliation | Collaboration | ¹Ant Group, ²Tsinghua University, ³Alibaba Group |
| Pseudocode | Yes | Algorithm 1: Pseudocode of AADI |
| Open Source Code | Yes | Code and models are available at https://github.com/WanXiaopei/aadi. |
| Open Datasets | Yes | Our experiments are implemented on the challenging Microsoft COCO 2017 [Lin et al., 2014] dataset. |
| Dataset Splits | Yes | It consists of 118k images for training (train-2017) and 5k images for validation (val-2017). There are also 20k images without annotations for testing (test-dev). |
| Hardware Specification | Yes | Our code is deployed on a machine with 8 Tesla V100-SXM2-16GB GPUs and an Intel Xeon Platinum 8163 CPU. |
| Software Dependencies | Yes | Our software environment mainly includes Ubuntu 18.04 LTS, CUDA 10.1, and PyTorch [Paszke et al., 2017] 1.6.0. |
| Experiment Setup | Yes | The hyper-parameters of our model follow detectron2, and all models are based on FPN [Lin et al., 2017]. For AADI-RPN, the dilations of the two RPNc are set to 2 and 4, and these two RPNc do not share parameters. For AADI-RetinaNet, the dilation is set to 2. Lreg and Lcls are the box regression loss and the classification loss, respectively; the two loss terms are balanced by λ. In the implementation, binary cross-entropy loss and Smooth L1 loss [Girshick, 2015] are used as the classification and box regression losses, respectively, and λ is empirically set to 5. |
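The loss formulation quoted in the Experiment Setup row can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the loss is shown per-element over flat lists, and it assumes λ scales the regression term (the quoted text says only that the two terms "are balanced by λ", with λ = 5).

```python
import math

def smooth_l1(pred, target, beta=1.0):
    # Smooth L1 loss [Girshick, 2015]: quadratic near zero, linear beyond beta.
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def binary_cross_entropy(p, y, eps=1e-7):
    # Standard BCE on a predicted probability p in (0, 1) against a label y in {0, 1}.
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def combined_loss(cls_probs, cls_labels, box_preds, box_targets, lam=5.0):
    # Combined objective as described: L = L_cls + lambda * L_reg, lambda = 5.
    # (Which term lambda multiplies is an assumption; the paper only says the
    # two terms are balanced by lambda.)
    cls = sum(binary_cross_entropy(p, y)
              for p, y in zip(cls_probs, cls_labels)) / len(cls_labels)
    reg = sum(smooth_l1(p, t)
              for p, t in zip(box_preds, box_targets)) / len(box_targets)
    return cls + lam * reg
```

For example, uninformative classification scores (p = 0.5 against positive labels) and box deltas off by one unit give a classification term of −ln(0.5) ≈ 0.693 and a regression term of 0.5, so the combined loss is about 0.693 + 5 × 0.5 ≈ 3.19.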