Learning Transferable UAV for Forest Visual Perception

Authors: Lyujie Chen, Wufan Wang, Jihong Zhu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Both simulated and real-world flights are tested under a variety of appearance and environment changes. The ResNet-18 adaptation network and its variant achieve the best real-world result of 84.08% accuracy.
Researcher Affiliation | Academia | Lyujie Chen, Wufan Wang, Jihong Zhu; Beijing National Research Center for Information Science and Technology (BNRist); Department of Computer Science and Technology, Tsinghua University, Beijing, China; {chenlj16, wwf14, jhzhu}@mails.tsinghua.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'Additional videos and the full training/testing datasets are available at https://sites.google.com/view/forest-trail-dataset.' This link refers to datasets and videos, but does not explicitly mention the source code for the methodology described in the paper.
Open Datasets | Yes | Additional videos and the full training/testing datasets are available at https://sites.google.com/view/forest-trail-dataset.
Dataset Splits | Yes | The paper provides a per-task table of dataset splits, listing the training, validation, and test data sources and the number of samples in each split.
Hardware Specification | Yes | It requires about 5 hours on a server equipped with an NVIDIA Titan X GPU.
Software Dependencies | No | The paper states only that 'The model is implemented in Caffe [Jia et al., 2014] and trained using standard backpropagation.'
Experiment Setup | Yes | The initial learning rate is set to 0.05. It requires about 5 hours on a server equipped with an NVIDIA Titan X GPU. Then, we introduce the unlabeled data in the target domain to train a ResNet-18 adaptation network. We set its learning rate to 0.003 and use SGD with 0.75 momentum. After every 300 iterations of training, a test is conducted on the validation set.
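To make the quoted hyperparameters concrete, below is a minimal sketch of such a training schedule: SGD with learning rate 0.003 and momentum 0.75, with a validation pass every 300 iterations. The paper implements its networks in Caffe; this PyTorch-style rewrite, including the `source_loader`, `val_loader`, and `max_iters` names, is an assumption for illustration rather than the authors' released code, and it omits the paper's domain-adaptation objective on unlabeled target-domain data.

```python
# Hypothetical sketch of the quoted training schedule (not the authors' Caffe code):
# SGD, lr=0.003, momentum=0.75, validation every 300 iterations.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def train_adaptation(model, source_loader, val_loader, device, max_iters=9000):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.75)

    it = 0
    while it < max_iters:
        for images, labels in source_loader:
            model.train()
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            it += 1

            # "After every 300 iterations of training, a test is conducted
            # on the validation set."
            if it % 300 == 0:
                evaluate(model, val_loader, device)
            if it >= max_iters:
                break

def evaluate(model, val_loader, device):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"validation accuracy: {correct / total:.4f}")

# Example usage with an ImageNet-initialized ResNet-18 backbone (assumption):
# model = resnet18(weights="IMAGENET1K_V1")
# model.fc = nn.Linear(model.fc.in_features, num_classes)
# train_adaptation(model, source_loader, val_loader, torch.device("cuda"))
```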