Learning Modulated Loss for Rotated Object Detection

Authors: Wen Qian, Xue Yang, Silong Peng, Junchi Yan, Yue Guo

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results using one-stage and two-stage detectors demonstrate the effectiveness of our loss. The integrated network achieves competitive performance on several benchmarks including DOTA and UCAS-AOD.
Researcher Affiliation | Academia | 1 Institute of Automation, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences; 3 Department of Computer Science and Engineering, Shanghai Jiao Tong University; 4 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Pseudocode | No | The paper contains mathematical equations (1-6) but no explicitly labeled "Pseudocode" or "Algorithm" block, nor structured steps formatted like code or an algorithm.
Open Source Code | Yes | The code is available at https://github.com/yangxue0827/RotationDetection.
Open Datasets | Yes | DOTA (Xia et al. 2018): the main experiments are carried out on DOTA, which has a total of 2,806 aerial images and 15 categories... ICDAR2015 (Karatzas et al. 2015): a scene text dataset... HRSC2016 (Liu et al. 2017): a dataset for ship detection... UCAS-AOD (Zhu et al. 2015): a remote sensing dataset.
Dataset Splits | Yes | The proportions of the training set, the validation set, and the test set are respectively 1/2, 1/6, and 1/3. There are 188,282 instances for training and validation.
Hardware Specification | Yes | Experiments are implemented with TensorFlow (Abadi et al. 2016) on a server with Ubuntu 16.04, an NVIDIA GTX 2080 Ti, and 11 GB of memory.
Software Dependencies | No | The paper mentions TensorFlow and Ubuntu 16.04 but does not specify a version number for TensorFlow (e.g., "TensorFlow 2.x") or a specific patch version for Ubuntu (e.g., "Ubuntu 16.04.x").
Experiment Setup | Yes | Weight decay and momentum are 1e-4 and 0.9, respectively. The training epoch is 30 in total, and the number of iterations per epoch depends on the number of samples in the dataset. The initial learning rate is 5e-4, and it changes to 5e-5 at epoch 18 and to 5e-6 at epoch 24. In the first quarter of the training epochs, a warm-up strategy is adopted to find a suitable learning rate.
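The learning-rate schedule described in the experiment setup can be sketched as a small Python function. This is a minimal illustration using the values reported above (5e-4 initial, 5e-5 at epoch 18, 5e-6 at epoch 24, warm-up over the first quarter of 30 epochs); the exact shape of the warm-up ramp is not specified in the report, so a linear ramp is assumed here for illustration.

```python
def learning_rate(epoch, total_epochs=30, base_lr=5e-4):
    """Piecewise learning-rate schedule from the reported setup.

    Warm-up over the first quarter of training (ramp shape assumed
    linear), then 5e-4 until epoch 18, 5e-5 until epoch 24, and
    5e-6 afterwards.
    """
    warmup_epochs = total_epochs // 4  # first quarter of training
    if epoch < warmup_epochs:
        # assumed linear ramp up to the initial learning rate
        return base_lr * (epoch + 1) / warmup_epochs
    if epoch < 18:
        return base_lr         # 5e-4
    if epoch < 24:
        return base_lr / 10    # 5e-5
    return base_lr / 100       # 5e-6
```

In TensorFlow such a step schedule is typically implemented as a piecewise-constant decay keyed on the global step, with the per-epoch boundaries converted to iteration counts.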