Optimal Feature Transport for Cross-View Image Geo-Localization
Authors: Yujiao Shi, Xin Yu, Liu Liu, Tong Zhang, Hongdong Li
AAAI 2020, pp. 11990-11997 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on large-scale datasets have demonstrated that our method has remarkably boosted the state-of-the-art cross-view localization performance, e.g., on the CVUSA dataset, with significant improvements for top-1 recall from 40.79% to 61.43%, and for top-10 from 76.36% to 90.49%. |
| Researcher Affiliation | Collaboration | Yujiao Shi,1,2 Xin Yu,1,2 Liu Liu,1,2 Tong Zhang,1,3 Hongdong Li1,2 1Australian National University, Canberra, Australia. 2Australian Centre for Robotic Vision, Australia. 3Motovis Australia Pty Ltd firstname.lastname@anu.edu.au |
| Pseudocode | No | The paper does not include any explicit pseudocode blocks or algorithms. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating the release of source code for the described methodology. |
| Open Datasets | Yes | We conduct our experiments on two standard benchmark datasets, namely CVUSA (Zhai et al. 2017) and CVACT (Liu and Li 2019), for evaluation and comparisons. |
| Dataset Splits | Yes | CVUSA provides 8,884 image pairs for testing and CVACT provides the same number of pairs for validation (denoted as CVACT val). |
| Hardware Specification | No | The paper mentions 'GPU gift donated by NVIDIA Corporation' but does not specify exact GPU models, CPU models, or other detailed hardware specifications used for experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'VGG16 with pretrained weights on ImageNet' but does not provide specific version numbers for any software dependencies such as Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | We set γ to 10 for the weighted soft-margin triplet loss. Our network is trained using Adam optimizer (Kingma and Ba 2014) with a learning rate of 10^-5 and batch size of Bs = 12. We exploit an exhaustive mini-batch strategy (Vo and Hays 2016) to construct the maximum number of triplets within each batch. |
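The top-1 and top-10 recall figures quoted in the Research Type row are standard retrieval metrics: for each ground-level query, the matching aerial reference must rank within the top k nearest descriptors. A minimal NumPy sketch of that evaluation follows; the function name, array shapes, and the assumption that query i matches reference i are illustrative, not taken from the paper.

```python
import numpy as np

def recall_at_k(query_feats, ref_feats, k):
    """Top-k recall for cross-view retrieval.

    query_feats, ref_feats: (N, D) L2-normalized descriptor arrays,
    where query i's true match is reference i (hypothetical setup).
    """
    # Cosine similarity between every query and every reference.
    sims = query_feats @ ref_feats.T                          # (N, N)
    # Rank references per query (descending similarity), keep top k.
    topk = np.argsort(-sims, axis=1)[:, :k]
    # A hit means the true match (same index) appears among the top k.
    hits = (topk == np.arange(len(query_feats))[:, None]).any(axis=1)
    return hits.mean()
```

With this convention, the paper's reported CVUSA improvement corresponds to `recall_at_k(..., k=1)` rising from 0.4079 to 0.6143.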
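The Experiment Setup row quotes a weighted soft-margin triplet loss with γ = 10, i.e. L = log(1 + exp(γ·(d_pos − d_neg))). A hedged NumPy sketch of that loss is below; the use of squared Euclidean distance is an assumption, as the paper's exact distance function is not quoted in this report.

```python
import numpy as np

def weighted_soft_margin_triplet(anchor, pos, neg, gamma=10.0):
    """Weighted soft-margin triplet loss: log(1 + exp(gamma * (d_p - d_n))).

    gamma = 10 follows the quoted setup; squared Euclidean distance
    between descriptors is an assumption made for this sketch.
    """
    d_pos = np.sum((anchor - pos) ** 2, axis=-1)
    d_neg = np.sum((anchor - neg) ** 2, axis=-1)
    # log(1 + exp(x)) computed stably as logaddexp(0, x).
    return np.mean(np.logaddexp(0.0, gamma * (d_pos - d_neg)))
```

When positive and negative distances are equal the loss is log 2 per triplet, and it decays toward 0 as the positive pulls closer than the negative, which is the gradient behavior the soft margin is designed for.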