Cross-View Geo-Localization via Learning Disentangled Geometric Layout Correspondence
Authors: Xiaohan Zhang, Xingyu Li, Waqas Sultani, Yi Zhou, Safwan Wshah
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that GeoDTR not only achieves state-of-the-art results but also significantly boosts the performance on same-area and cross-area benchmarks. To evaluate the effectiveness of GeoDTR, we conduct extensive experiments on two datasets, CVUSA (Workman, Souvenir, and Jacobs 2015), and CVACT (Liu and Li 2019). |
| Researcher Affiliation | Academia | 1 Department of Computer Science, University of Vermont, Burlington, USA; 2 Vermont Complex Systems Center, University of Vermont, Burlington, USA; 3 Shanghai Center for Brain Science and Brain-Inspired Technology, China; 4 Intelligent Machine Lab, Information Technology University, Pakistan; 5 NEL-BITA, School of Information Science and Technology, University of Science and Technology of China, China |
| Pseudocode | No | The paper provides architectural diagrams and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code can be found at https://gitlab.com/vail-uvm/geodtr. |
| Open Datasets | Yes | To evaluate the effectiveness of Geo DTR, we conduct extensive experiments on two datasets, CVUSA (Workman, Souvenir, and Jacobs 2015), and CVACT (Liu and Li 2019). |
| Dataset Splits | Yes | Both CVUSA and CVACT contain 35,532 training pairs. CVUSA provides 8,884 pairs for testing and CVACT has the same number of pairs in its validation set (CVACT val). Besides, CVACT provides a challenging and large-scale testing set (CVACT test) which contains 92,802 pairs. |
| Hardware Specification | Yes | We train the model on a single Nvidia V100 GPU for 200 epochs with AdamW (Loshchilov and Hutter 2017) optimizer. |
| Software Dependencies | No | The paper mentions the "AdamW (Loshchilov and Hutter 2017) optimizer" but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | α and β are set to 10 and 5 respectively. We train the model on a single Nvidia V100 GPU for 200 epochs with AdamW (Loshchilov and Hutter 2017) optimizer. |
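
For context, the quoted setup (single V100 GPU, 200 epochs, AdamW, α = 10, β = 5) can be summarized as a minimal training-configuration sketch. The model stub, learning rate, batch size, and the assumed roles of α (soft-margin triplet scale) and β (auxiliary-loss weight) are illustrative assumptions rather than the authors' implementation; the reference code is at https://gitlab.com/vail-uvm/geodtr.

```python
# Hedged sketch of the reported training setup: single GPU, 200 epochs,
# AdamW optimizer, alpha = 10, beta = 5. Anything not stated in the quoted
# text (learning rate, model architecture, loss definitions) is a placeholder.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset


class TwoBranchStub(torch.nn.Module):
    """Placeholder for GeoDTR's ground/aerial branches (not the real model)."""

    def __init__(self, in_dim=512, out_dim=128):
        super().__init__()
        self.ground = torch.nn.Linear(in_dim, out_dim)
        self.aerial = torch.nn.Linear(in_dim, out_dim)

    def forward(self, g, a):
        return self.ground(g), self.aerial(a)


def soft_margin_triplet(g_feat, a_feat, alpha=10.0):
    # Assumed role of alpha: scaling factor inside a soft-margin triplet-style
    # objective; the paper's exact loss should be taken from the released code.
    sim = torch.nn.functional.cosine_similarity(g_feat, a_feat)
    return torch.nn.functional.softplus(alpha * (1.0 - sim)).mean()


def train():
    device = "cuda" if torch.cuda.is_available() else "cpu"  # single V100 reported
    model = TwoBranchStub().to(device)
    optimizer = AdamW(model.parameters(), lr=1e-4)  # lr assumed, not in quoted text

    # Random tensors stand in for CVUSA/CVACT ground-aerial feature pairs.
    pairs = TensorDataset(torch.randn(256, 512), torch.randn(256, 512))
    loader = DataLoader(pairs, batch_size=32, shuffle=True)

    beta = 5.0  # reported value; assumed to weight an auxiliary loss term
    for epoch in range(200):  # 200 epochs as reported
        for g, a in loader:
            g, a = g.to(device), a.to(device)
            g_feat, a_feat = model(g, a)
            aux = torch.zeros((), device=device)  # placeholder auxiliary term
            loss = soft_margin_triplet(g_feat, a_feat, alpha=10.0) + beta * aux
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    train()
```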