Neural Non-Rigid Tracking

Authors: Aljaž Božić, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental 5 Experiments In the following, we evaluate our method quantitatively and qualitatively on both non-rigid tracking and non-rigid reconstruction. To this end, we use the DeepDeform dataset [4] for training, with the given 340-30-30 train-val-test split of RGB-D sequences. Both non-rigid tracking and reconstruction are evaluated on the hidden test set of the DeepDeform benchmark.
Researcher Affiliation Academia 1Technical University of Munich 2Stanford University
Pseudocode Yes Algorithm 1 Gauss-Newton Optimization
Open Source Code Yes We make our code available at https://github.com/DeformableFriends/NeuralTracking.
Open Datasets Yes To this end, we use the DeepDeform dataset [4] for training, with the given 340-30-30 train-val-test split of RGB-D sequences.
Dataset Splits Yes To this end, we use the DeepDeform dataset [4] for training, with the given 340-30-30 train-val-test split of RGB-D sequences.
Hardware Specification Yes We use an Intel Xeon 6240 Processor and an Nvidia RTX 2080Ti GPU.
Software Dependencies No The paper states, 'The non-rigid tracking module has been implemented using the PyTorch library [24],' and mentions 'PWC-Net model [30],' but it does not specify version numbers for these or any other software dependencies.
Experiment Setup Yes The non-rigid tracking module has been implemented using the PyTorch library [24] and trained using stochastic gradient descent with momentum 0.9 and learning rate 10⁻⁵. We use a 10-factor learning rate decay every 10k iterations, requiring about 30k iterations in total for convergence, with a batch size of 4. For optimal performance, we first optimize the correspondence predictor Φφ with (λcorr, λgraph, λwarp) = (5, 5, 5), without the weighting function Ψψ. Afterwards, we optimize the weighting function parameters ψ with (λcorr, λgraph, λwarp) = (0, 1000, 1000), while keeping φ fixed. Finally, we fine-tune both φ and ψ together, with (λcorr, λgraph, λwarp) = (5, 5, 5). In our experiments we use (λ2D, λdepth, λreg) = (0.001, 1, 1).
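The staged training schedule quoted in the Experiment Setup row can be summarized with a short sketch. This is illustrative only: `total_loss` and the `STAGES` table are hypothetical names, and the three loss arguments stand in for the paper's correspondence, graph, and warp losses; only the λ values and the three-stage ordering come from the quoted setup.

```python
def total_loss(l_corr, l_graph, l_warp, lambdas):
    """Weighted sum of three training losses (hypothetical helper).

    lambdas = (lambda_corr, lambda_graph, lambda_warp), as reported
    per stage in the paper's experiment setup.
    """
    lc, lg, lw = lambdas
    return lc * l_corr + lg * l_graph + lw * l_warp

# The three training stages and their reported loss weights:
STAGES = [
    ("correspondence predictor", (5, 5, 5)),       # train phi, weighting off
    ("weighting function",       (0, 1000, 1000)), # train psi, phi frozen
    ("joint fine-tuning",        (5, 5, 5)),       # fine-tune phi and psi
]
```

Note that stage 2 zeroes λcorr entirely, so the weighting network is supervised only through the graph and warp terms while the correspondence predictor stays fixed.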
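The Pseudocode row refers to the paper's Algorithm 1 (Gauss-Newton optimization). As a generic illustration of that solver family, not a reproduction of the paper's algorithm (which operates on deformation-graph parameters with learned correspondence weights), a minimal Gauss-Newton loop looks like:

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, iters=10):
    """Minimal Gauss-Newton solver for nonlinear least squares (a sketch).

    residual_fn(x) returns the residual vector r(x); jacobian_fn(x)
    returns its Jacobian J(x). Each iteration solves the normal
    equations for an update step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)                # residuals at current estimate
        J = jacobian_fn(x)                # Jacobian of residuals w.r.t. x
        # Solve J^T J dx = -J^T r for the Gauss-Newton update.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Usage: for a linear residual r(x) = A x - b, a single Gauss-Newton
# step recovers the least-squares solution exactly.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([4.0, 9.0])
x = gauss_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2), iters=1)
```

The normal-equations solve is the step the paper differentiates through to train the correspondence and weighting networks end to end.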