Dual-Resolution Correspondence Networks
Authors: Xinghui Li, Kai Han, Shuda Li, Victor Prisacariu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We comprehensively evaluate our method on large-scale public benchmarks including HPatches, InLoc, and Aachen Day-Night. It achieves state-of-the-art results on all of them. |
| Researcher Affiliation | Academia | 1 Active Vision Lab, University of Oxford, {xinghui, shuda, victor}@robots.ox.ac.uk; 2 Visual Geometry Group, University of Oxford, khan@robots.ox.ac.uk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code can be found at https://code.active.vision. |
| Open Datasets | Yes | We train our models on the MegaDepth dataset [50], which consists of a large number of internet images of 196 scenes whose sparse 3D point clouds are constructed by COLMAP [51, 52]. |
| Dataset Splits | Yes | We use the scenes with more than 500 valid image pairs for training and the remaining scenes for validation. To avoid scene bias, 110 image pairs are randomly selected from each training scene to constitute our training set. In total, we obtain 15,070 training pairs and 14,638 validation pairs. (A hedged code sketch of this split follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions implementing the pipeline in PyTorch [48], but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We train our model using Adam optimizer [49] for 15 epochs with an initial learning rate of 0.01 which is halved every 5 epochs. (A hedged optimizer/scheduler sketch follows the table.) |
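
The Dataset Splits row describes a simple scene-level partition of MegaDepth. Below is a minimal sketch of that split, assuming the image pairs are already grouped per scene in a dictionary; the function name, argument names, and data layout are hypothetical, and only the more-than-500-pair threshold and the 110-pairs-per-scene subsampling come from the paper.

```python
import random

def split_scenes(scene_pairs, min_pairs=500, pairs_per_scene=110, seed=0):
    """scene_pairs: dict mapping a scene id to its list of valid image pairs (assumed layout)."""
    rng = random.Random(seed)
    train_pairs, val_pairs = [], []
    for scene, pairs in scene_pairs.items():
        if len(pairs) > min_pairs:
            # Subsample a fixed number of pairs per training scene to avoid scene bias.
            train_pairs.extend(rng.sample(pairs, pairs_per_scene))
        else:
            # Scenes with too few valid pairs go to validation, per the paper's description.
            val_pairs.extend(pairs)
    return train_pairs, val_pairs
```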
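The Experiment Setup row reports only the optimizer, epoch count, and learning-rate schedule. The following PyTorch sketch shows one way to express that schedule; the model is a placeholder standing in for DualRC-Net, and no other hyperparameters are claimed.

```python
import torch

model = torch.nn.Conv2d(3, 16, 3)   # placeholder module, not the actual DualRC-Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# gamma=0.5 with step_size=5 halves the learning rate every 5 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)

for epoch in range(15):
    # ... forward/backward passes over the 15,070 training pairs would go here ...
    optimizer.step()     # stand-in for the per-batch parameter updates
    scheduler.step()     # advance the schedule once per epoch
```

With this scheduler the learning rate drops from 0.01 to 0.005 after epoch 5 and to 0.0025 after epoch 10, matching the reported "halved every 5 epochs" behaviour over 15 epochs.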