LCD: Learned Cross-Domain Descriptors for 2D-3D Matching
Authors: Quang-Hieu Pham, Mikaela Angelina Uy, Binh-Son Hua, Duc Thanh Nguyen, Gemma Roig, Sai-Kit Yeung
AAAI 2020, pp. 11856–11864 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results confirm the robustness of our approach as well as its competitive performance not only in solving cross-domain tasks but also in being able to generalize to solve sole 2D and 3D tasks. |
| Researcher Affiliation | Academia | Singapore University of Technology and Design; Stanford University; The University of Tokyo; Deakin University; Goethe University of Frankfurt am Main; Hong Kong University of Science and Technology |
| Pseudocode | No | The paper includes a network architecture diagram (Figure 1) which describes the components of the network, but it does not provide pseudocode or a clearly labeled algorithm block with structured steps. |
| Open Source Code | Yes | Our dataset and code are released publicly at https://hkust-vgd.github.io/lcd. |
| Open Datasets | Yes | Our dataset and code are released publicly at https://hkust-vgd.github.io/lcd. ... In this work, we use the data from SceneNN (Hua et al. 2016) and 3DMatch (Zeng et al. 2017). |
| Dataset Splits | No | We follow the same train and test splits from (Zeng et al. 2017) and (Hua, Tran, and Yeung 2018). Our training dataset consists of 110 RGB-D scans, of which 56 scenes are from SceneNN and 54 scenes are from 3DMatch. While train and test splits are stated, a distinct validation set or its split percentage is not explicitly specified. |
| Hardware Specification | Yes | We train our network on a cluster equipped with NVIDIA V100 GPUs and 256 GB of memory. |
| Software Dependencies | No | The paper states, 'Our network is implemented in PyTorch,' but it does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Our network is implemented in PyTorch. The network is trained using the SGD optimizer, with the learning rate set to 0.01. ... It takes around 17 hours to train our network, stopping after 250 epochs. |
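The reported optimizer settings (plain SGD, learning rate 0.01, 250 epochs) can be illustrated with a minimal, dependency-free sketch. This is a generic toy example minimizing a quadratic, not the authors' training code; the objective `f(w) = (w - 3)^2` is a stand-in chosen only to show the update rule.

```python
# Minimal SGD sketch mirroring the reported hyperparameters:
# learning rate 0.01, 250 epochs (one update per "epoch" here).
def sgd_step(w, grad, lr=0.01):
    """One vanilla SGD update: w <- w - lr * grad."""
    return w - lr * grad

w = 0.0
for epoch in range(250):
    grad = 2.0 * (w - 3.0)  # gradient of f(w) = (w - 3)^2
    w = sgd_step(w, grad)

# After 250 steps, w has converged close to the minimum at 3.
print(round(w, 4))
```

In the actual paper, this update is applied to the weights of the 2D/3D auto-encoder networks via `torch.optim.SGD`; only the hyperparameter values (lr = 0.01, 250 epochs) come from the source.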