FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection
Authors: Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms by 1.5 mAP on DOTA-v1.5, all while significantly reducing the model parameters to 16%. |
| Researcher Affiliation | Academia | 1Korea Advanced Institute of Science and Technology, South Korea 2Hanbat National University, South Korea |
| Pseudocode | Yes | A detailed structure and the corresponding pseudo code are provided in the appendix. |
| Open Source Code | No | No explicit statement about releasing source code or a link to a code repository for FRED was found. |
| Open Datasets | Yes | DOTA dataset (Xia et al. 2018; Ding et al. 2021) is a large-scale benchmark designed for assessing oriented object detection in aerial images. |
| Dataset Splits | Yes | For experimental settings, both the training and validation sets from DOTA are combined for training, with the test set reserved for evaluations. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific computer configurations) used for running the experiments. |
| Software Dependencies | No | Our implementation is based on the MMRotate (Zhou et al. 2022) and E(2)-CNN (Weiler and Cesa 2019) framework. The paper does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Focal loss (Lin et al. 2017), convex IoU loss (Rezatofighi et al. 2019), and spatial constraint loss with the APAA strategy as described in Li et al. (2022) are employed for training. We set the weight of our edge constraint loss to 0.0025 by default. The training was conducted with the stochastic gradient descent optimizer with the momentum and the weight decay set to 0.9 and 0.0001, respectively. The initial learning rate is 0.008, and the model is trained for 40 epochs with batch size 8, using a step decay schedule. |
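The optimizer schedule quoted in the Experiment Setup row can be sketched in plain Python. This is a minimal illustration of a step-decay learning-rate schedule with the quoted initial rate of 0.008 over 40 epochs; the milestone epochs and decay factor are assumptions, since the quoted text only says "a step decay schedule" without specifying them:

```python
def step_decay_lr(epoch, base_lr=0.008, milestones=(24, 33), gamma=0.1):
    """Return the learning rate for a given epoch under step decay.

    base_lr matches the quoted initial learning rate; milestones and gamma
    are hypothetical values, not taken from the paper.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma  # multiply by the decay factor at each passed milestone
    return lr

# Example: the rate stays at 0.008 until the first assumed milestone,
# then drops by a factor of 10 at each subsequent milestone.
schedule = [step_decay_lr(e) for e in range(40)]
```

The momentum (0.9) and weight decay (0.0001) quoted above would be passed to the SGD optimizer itself rather than to the schedule.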