FAS-Net: Construct Effective Features Adaptively for Multi-Scale Object Detection
Authors: Jiangqiao Yan, Yue Zhang, Zhonghan Chang, Tengfei Zhang, Menglong Yan, Wenhui Diao, Hongqi Wang, Xian Sun (pp. 12573-12580)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on PASCAL07/12 and MSCOCO17 demonstrate the effectiveness and generalization of the proposed method. |
| Researcher Affiliation | Academia | ¹Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; ²Key Laboratory of Network Information System Technology (NIST), Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China; ³School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100190, China |
| Pseudocode | No | The paper describes the proposed method in textual form and through diagrams (Figure 1, 2, 3) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions mmdetection (https://github.com/open-mmlab/mmdetection) and PyTorch (https://pytorch.org/) as frameworks used for re-implementation, but it does not provide a direct link to the specific source code of the FAS-Net methodology itself or explicitly state that their code is released. |
| Open Datasets | Yes | We conduct experiments on three widely used datasets: PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO 2017. All network backbones are pretrained on the ImageNet 2012 dataset (Russakovsky et al. 2015) and fine-tuned on the detection dataset. |
| Dataset Splits | Yes | For this evaluation, we train detectors on the VOC2007 and VOC2012 trainval set and test them on the VOC2007 test set. |
| Hardware Specification | Yes | All experiments on VOC dataset are trained with a single RTX 2080 GPU, CUDA 10 and cuDNN 7, without parallel and distributed training. |
| Software Dependencies | Yes | All experiments on VOC dataset are trained with a single RTX 2080 GPU, CUDA 10 and cuDNN 7, without parallel and distributed training. |
| Experiment Setup | Yes | We initialize the learning rate as 1×10⁻³, and then decrease it to 1×10⁻⁴ at 9 epochs, and stop at 12 epochs. The batch size is set to 1 to remove the impact of Batch Normalization on network performance. |
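The quoted schedule is a standard one-step decay. A minimal sketch of it as a plain Python function (the function name, epoch indexing, and keyword defaults are assumptions for illustration, not code from the paper):

```python
def learning_rate(epoch, base_lr=1e-3, decay_lr=1e-4, decay_epoch=9, stop_epoch=12):
    """Step schedule as quoted: 1e-3 initially, dropped to 1e-4 at epoch 9.

    Epoch indexing (0-based here) is an assumption; the paper does not specify it.
    """
    if epoch >= stop_epoch:
        raise ValueError("training stops at 12 epochs")
    return base_lr if epoch < decay_epoch else decay_lr
```

In a PyTorch re-implementation (the paper uses mmdetection on PyTorch), the same effect is typically obtained with a step scheduler such as `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[9]` and `gamma=0.1`.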