HyNet: Learning Local Descriptor with Hybrid Similarity Measure and Triplet Loss
Authors: Yurun Tian, Axel Barroso Laguna, Tony Ng, Vassileios Balntas, Krystian Mikolajczyk
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | HyNet surpasses previous methods by a significant margin on standard benchmarks that include patch matching, verification, and retrieval, as well as outperforming full end-to-end methods on 3D reconstruction tasks. |
| Researcher Affiliation | Collaboration | Imperial College London; Facebook Reality Labs |
| Pseudocode | No | The paper describes the network architecture and mathematical formulations, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format. |
| Open Source Code | Yes | Codes and models are available at https://github.com/yuruntian/HyNet. |
| Open Datasets | Yes | The UBC dataset [7] consists of three scene subsets, namely Liberty, Notredame and Yosemite. |
| Dataset Splits | Yes | Following the evaluation protocol [7], models are trained on one subset and tested on the other two (see the split sketch below the table). |
| Hardware Specification | No | The paper describes training parameters (e.g., epochs, batch size, optimizer) but does not provide specific details regarding the hardware used for running the experiments (e.g., GPU model, CPU type, memory). |
| Software Dependencies | No | The paper states that the architecture and training are implemented in PyTorch [32], but it gives no version numbers for PyTorch or any other dependency. |
| Experiment Setup | Yes | The network is trained for 200 epochs with a batch size of 1024 and the Adam optimizer [20]. Training starts from scratch, and the threshold τ in TLU for each layer is initialised to 1. We set α = 2 and γ = 0.1 (see the training sketch below the table). |
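
The leave-one-out protocol quoted in the Dataset Splits row is easy to make concrete. The sketch below enumerates the three train/test combinations; the lowercase subset names are illustrative, not identifiers from the authors' code.

```python
# UBC/Brown protocol [7]: train on one subset, test on the other two.
subsets = ["liberty", "notredame", "yosemite"]
splits = [(train, [s for s in subsets if s != train]) for train in subsets]
# -> [("liberty",   ["notredame", "yosemite"]),
#     ("notredame", ["liberty",   "yosemite"]),
#     ("yosemite",  ["liberty",   "notredame"])]
```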
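
The Experiment Setup row can likewise be wired into a runnable skeleton. Below is a minimal, hypothetical PyTorch sketch that only mirrors the reported hyper-parameters (200 epochs, batch size 1024, Adam, TLU thresholds initialised to 1, α = 2, γ = 0.1). The `ToyDescriptor` backbone, the `hybrid_triplet_loss` stand-in, and the random dummy patches are simplifications for illustration, not the authors' implementation; the actual architecture and loss are in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TLU(nn.Module):
    """Thresholded Linear Unit: y = max(x, tau), with a learnable per-channel tau."""
    def __init__(self, channels, init_tau=1.0):  # paper: each layer's tau initialised with 1
        super().__init__()
        self.tau = nn.Parameter(torch.full((1, channels, 1, 1), init_tau))

    def forward(self, x):
        return torch.maximum(x, self.tau)

class ToyDescriptor(nn.Module):
    """Tiny conv backbone standing in for the full HyNet architecture."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), TLU(32),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), TLU(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm descriptors

def hybrid_triplet_loss(a, p, alpha=2.0, margin=1.2):
    """Triplet loss over a mix of L2 and cosine terms.

    A plausible stand-in, NOT the paper's exact hybrid similarity; see the
    paper and repository for the real formulation.
    """
    d_l2 = torch.cdist(a, p)                 # pairwise L2 distances
    d_cos = 1.0 - a @ p.t()                  # pairwise cosine distances (unit vectors)
    d = d_l2 + alpha * d_cos                 # hybrid distance, weighted by alpha
    pos = d.diag()                           # matching pairs sit on the diagonal
    # hardest in-batch negative: mask the diagonal before taking the row minimum
    neg = (d + 1e6 * torch.eye(d.size(0), device=d.device)).min(dim=1).values
    return F.relu(margin + pos - neg).mean()

model = ToyDescriptor()
optimizer = torch.optim.Adam(model.parameters())  # Adam optimizer [20]
ALPHA, GAMMA = 2.0, 0.1  # reported values; gamma's exact role is defined in the paper

for epoch in range(200):                      # 200 epochs, as reported
    anchors = torch.randn(1024, 1, 32, 32)    # batch size 1024; dummy patches (real runs use UBC)
    positives = anchors + 0.05 * torch.randn_like(anchors)
    loss = hybrid_triplet_loss(model(anchors), model(positives), alpha=ALPHA)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The in-batch hardest-negative mining above follows the common descriptor-learning recipe; whether it matches the paper's exact sampling strategy should be checked against the released code.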