Deep Orientation Uncertainty Learning based on a Bingham Loss

Authors: Igor Gilitschenski, Roshni Sahoo, Wilko Schwarting, Alexander Amini, Sertac Karaman, Daniela Rus

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
  "In this section we evaluate the proposed Bingham loss on its ability to learn calibrated uncertainty estimates for orientations. This goes beyond comparing point estimates of orientations; we evaluate how well the estimated distribution of orientations can explain the data. We will also show that the Bingham distribution representation is capable of capturing ambiguity and uncertainty in SO(3) better than state-of-the-art approaches. We investigate characteristics and behaviors by training neural networks on two head-pose datasets, IDIAP (Odobez, 2003) and UPNA (Ariz et al., 2016), as well as the object pose dataset TLESS (Hodaň et al., 2017)."
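To see concretely why the Bingham distribution captures orientation ambiguity, note that its density on unit quaternions, f(x) ∝ exp(x^T M Z M^T x), is antipodally symmetric: q and -q, which encode the same rotation, always receive equal probability. A minimal NumPy sketch of the unnormalized log-density (the normalizing constant F(Z) is omitted, and the function name is illustrative, not from the paper's code):

```python
import numpy as np

def bingham_unnorm_logpdf(x, M, Z):
    """Unnormalized Bingham log-density on the unit sphere S^3:
    log f(x) = x^T M diag(Z) M^T x; the normalizing constant F(Z)
    (which the paper handles separately) is omitted."""
    return float(x @ M @ np.diag(Z) @ M.T @ x)

# Mode at the quaternion e4: M = I, Z = diag(z1, z2, z3, 0)
# with z_i <= 0 controlling dispersion around the mode.
M = np.eye(4)
Z = np.array([-10.0, -10.0, -2.0, 0.0])
mode = M[:, 3]

# Antipodal symmetry: q and -q represent the same rotation and get
# identical density, so rotation ambiguity is modeled consistently.
assert np.isclose(bingham_unnorm_logpdf(mode, M, Z),
                  bingham_unnorm_logpdf(-mode, M, Z))

# The mode attains the maximal (zero) unnormalized log-density.
off_mode = np.array([1.0, 0.0, 0.0, 0.0])
assert bingham_unnorm_logpdf(mode, M, Z) == 0.0
assert bingham_unnorm_logpdf(off_mode, M, Z) < 0.0
```

The quadratic form makes the symmetry immediate: f(x) = f(-x) holds for any M and Z, which a von Mises parameterization over Euler angles does not guarantee.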
Researcher Affiliation: Academia
  "Igor Gilitschenski¹, Roshni Sahoo¹, Wilko Schwarting¹, Alexander Amini¹, Sertac Karaman², Daniela Rus¹; ¹ Computer Science and Artificial Intelligence Lab, MIT; ² Laboratory for Information and Decision Systems, MIT"
Pseudocode: No
  The paper describes the model and methods in prose and with diagrams (e.g., Figure 3), but does not include explicit pseudocode or algorithm blocks.
Open Source Code: Yes
  "Code available at https://github.com/igilitschenski/deep_bingham" (paper footnote 1)
Open Datasets: Yes
  "We investigate characteristics and behaviors by training neural networks on two head-pose datasets, IDIAP (Odobez, 2003) and UPNA (Ariz et al., 2016), as well as the object pose dataset TLESS (Hodaň et al., 2017)."
Dataset Splits: Yes
  "We use the Kinect RGB single-object images all of which are split into training, test, and validation sets."
Hardware Specification: No
  The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for the experiments; it only mentions the software framework (PyTorch).
Software Dependencies: No
  The paper mentions PyTorch and SciPy but does not provide version numbers for these dependencies, which are necessary for a reproducible setup.
Experiment Setup: Yes
  "All models were implemented in PyTorch and optimized with the Adam optimizer... In the first stage, we only learn to predict M and assume the dispersion to be fixed with Z = diag(a, a, a, 0)... In the second stage, we train to predict M and Z jointly... we train orientation estimation models for 5 epochs using the Bingham loss (BD-5) and the Von Mises loss (VM-5)... Each stage is carried out for 30 epochs."
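The fixed-dispersion first stage requires a 4×4 orthogonal M whose column paired with the zero entry of Z = diag(a, a, a, 0) is the mode quaternion. One standard way to build such an M from a predicted unit quaternion q uses the columns {q·i, q·j, q·k, q} of the quaternion left-multiplication matrix, which are mutually orthonormal whenever |q| = 1. A NumPy sketch under that assumption (an illustration, not the paper's implementation):

```python
import numpy as np

def quat_to_M(q):
    """Orthogonal 4x4 matrix whose last column is the unit quaternion
    q = (w, x, y, z). Columns are the Hamilton products q*i, q*j, q*k, q,
    which form an orthonormal basis of R^4 when |q| = 1."""
    w, x, y, z = q
    return np.array([
        [-x, -y, -z,  w],
        [ w, -z,  y,  x],
        [ z,  w, -x,  y],
        [-y,  x,  w,  z],
    ])

a = -5.0
Z = np.array([a, a, a, 0.0])  # fixed dispersion of the first training stage

q = np.array([0.5, 0.5, 0.5, 0.5])  # an example unit quaternion
M = quat_to_M(q)

assert np.allclose(M.T @ M, np.eye(4))  # M is orthogonal
assert np.allclose(M[:, 3], q)          # the mode column is q itself

# The unnormalized Bingham log-density x^T M diag(Z) M^T x peaks (at 0)
# exactly at the predicted mode q.
logf = lambda x: float(x @ M @ np.diag(Z) @ M.T @ x)
assert np.isclose(logf(q), 0.0)
```

With this construction the network only needs to regress a quaternion in stage one; stage two can then add outputs for the free entries of Z and train both jointly.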