Style-Content Metric Learning for Multidomain Remote Sensing Object Recognition
Authors: Wenda Zhao, Ruikai Yang, Yu Liu, You He
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four datasets show that our style-content metric learning achieves superior generalization performance against the state-of-the-art competitors. |
| Researcher Affiliation | Academia | (1) Dalian University of Technology, Dalian, China; (2) Tsinghua University, Beijing, China |
| Pseudocode | No | The paper describes the framework and its components using text and equations, but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | Yes | Code and model are available at: https://github.com/wdzhao123/TSCM. |
| Open Datasets | Yes | We conduct experiments using four remote sensing datasets: NWPU (Cheng, Zhou, and Han 2016), DOTA (Xia et al. 2018), HRRSD (Zhang et al. 2019) and DIOR (Li et al. 2020c). |
| Dataset Splits | No | The paper describes using training and testing datasets but does not provide explicit details for a separate validation split (e.g., percentages or sample counts). |
| Hardware Specification | Yes | Our model is implemented in PyTorch on a PC with an NVIDIA RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch' and 'Adam' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We resize the images to 256 x 256 pixels and set the batch size to 36. Adam (Kingma and Ba 2014) is used as the optimizer with a learning rate of 1.25e-4. We exponentially decay the learning rate of each parameter group by a gamma of 0.99 every epoch. The bias ε is set to 1e-6 to prevent the divisor and the square-root term from becoming zero. The hyper-parameters are set as α = 0.1, β = 0.5 and γ = 0.5. The model is first trained for 48 epochs, and the last four fully-connected layers are then finetuned for 10 epochs to improve performance. |
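The experiment-setup row above amounts to a standard PyTorch training configuration. The following is a minimal sketch of that configuration only; the network, loss, class count, and data are stand-ins (the paper's TSCM model and style-content losses are not reproduced here), while the optimizer, learning rate, ε, decay gamma, image size, batch size, and epoch count are the values reported in the table.

```python
# Sketch of the reported training configuration.
# Only the hyperparameter values come from the paper; the model,
# loss, and data below are placeholders for illustration.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in network and loss (not the paper's TSCM architecture).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 256 * 256, 10))
criterion = nn.CrossEntropyLoss()

# Adam with lr = 1.25e-4 and eps = 1e-6; exponential lr decay, gamma = 0.99 per epoch.
optimizer = optim.Adam(model.parameters(), lr=1.25e-4, eps=1e-6)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

# Dummy data standing in for images already resized to 256 x 256; batch size 36.
images = torch.randn(72, 3, 256, 256)
labels = torch.randint(0, 10, (72,))
loader = DataLoader(TensorDataset(images, labels), batch_size=36, shuffle=True)

for epoch in range(48):          # main training stage: 48 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()             # decay the learning rate by 0.99 each epoch
```

The subsequent 10-epoch finetuning of the last four fully-connected layers would follow the same loop with only those layers' parameters passed to the optimizer; the loss weights α = 0.1, β = 0.5, γ = 0.5 apply to the paper's style-content objectives, which are not shown here.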