Extracting Deformation-Aware Local Features by Learning to Deform
Authors: Guilherme Potje, Renato Martins, Felipe Chamone, Erickson Nascimento
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors in different datasets with both real and realistic synthetic deformable objects in still images. |
| Researcher Affiliation | Academia | Guilherme Potje (Universidade Federal de Minas Gerais); Renato Martins (Université Bourgogne Franche-Comté); Felipe Cadar (Universidade Federal de Minas Gerais); Erickson R. Nascimento (Universidade Federal de Minas Gerais). Department of Computer Science, Universidade Federal de Minas Gerais, Brazil; VIBOT EMR CNRS 6000, ImViA, Université Bourgogne Franche-Comté, France. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021. |
| Open Datasets | Yes | We evaluate our descriptor in different publicly available datasets containing deformable objects in diverse viewing conditions such as illumination, viewpoint, and deformation. For that, we have selected the two datasets recently proposed by GeoBit [25] and DeSurT [45]. The training data and source code are available at www.verlab.dcc.ufmg.br/descriptors/neurips2021. |
| Dataset Splits | Yes | Table 1 shows the MMA average achieved by testing different hyperparameters on the Bag sequence [25], which we used as a validation set and removed from the benchmark experiments. |
| Hardware Specification | Yes | Our network implementation has 3.7M trainable parameters and takes about 5.5 hours to train on a GeForce GTX 1080 Ti GPU. ... running on an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz and a GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper states 'We implement our network using PyTorch [28]' but does not provide a specific version number for PyTorch or any other software dependencies, which is required for reproducible setup details. |
| Experiment Setup | Yes | We implement our network using PyTorch [28] and optimize it via Adam with an initial learning rate of 5e-5, scaling it by 0.9 every 3,800 steps. ... We used a batch size of 8 image pairs containing up to 128 keypoint correspondences for each pair in our setup. ... we train the network for 10 epochs. ... The baseline model uses the margin µ = 0.5, no anchor swap, STN output of 3×3, and Dropout with probability p = 0.1 in the FC layers. |
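The setup row above compresses the paper's training configuration into a single sentence; the minimal PyTorch sketch below lays out the same hyperparameters in code. Everything structural here is an assumption: `DeformAwareNet`, the patch size, the descriptor dimension, and the dummy `loader` are placeholders, and a plain `nn.TripletMarginLoss` stands in for whatever margin-based loss the paper actually uses. Only the optimizer, learning-rate schedule, margin, anchor-swap setting, dropout probability, batch composition, and epoch count follow the values quoted in the table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformAwareNet(nn.Module):
    """Hypothetical stand-in for the paper's 3.7M-parameter descriptor network."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Baseline setting from the table: Dropout with p = 0.1 in the FC layers.
        self.fc = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(128, 128),
        )

    def forward(self, patches):
        # L2-normalized descriptors, as is conventional for metric learning.
        return F.normalize(self.fc(self.backbone(patches)), dim=1)


model = DeformAwareNet()

# Adam with an initial learning rate of 5e-5, scaled by 0.9 every 3,800 steps.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3800, gamma=0.9)

# Margin-based loss with mu = 0.5 and no anchor swap (baseline setting).
criterion = nn.TripletMarginLoss(margin=0.5, swap=False)

# Placeholder loader: real training uses batches of 8 image pairs with up to
# 128 keypoint correspondences each; random patches keep the sketch runnable.
loader = [tuple(torch.randn(8, 1, 32, 32) for _ in range(3))]

for epoch in range(10):  # the paper trains for 10 epochs
    for anchors, positives, negatives in loader:
        optimizer.zero_grad()
        loss = criterion(model(anchors), model(positives), model(negatives))
        loss.backward()
        optimizer.step()
        scheduler.step()  # per-step schedule: lr *= 0.9 every 3,800 steps
```

Note that `scheduler.step()` is called once per optimizer step rather than once per epoch, since the reported decay interval (3,800 steps) is step-based; this is the one design choice the quoted excerpt pins down unambiguously.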