BA-Net: Dense Bundle Adjustment Networks

Authors: Chengzhou Tang, Ping Tan

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on large-scale real data demonstrate the success of the proposed method. To demonstrate the effectiveness of our method, we evaluate on the ScanNet (Dai et al., 2017a) and KITTI (Geiger et al., 2012) datasets. Our method outperforms DeMoN (Ummenhofer et al., 2017), LS-Net (Clark et al., 2018), as well as several conventional baselines. Due to the page limit, we move the ablation studies, evaluation on DeMoN's dataset, multi-view SfM (up to 5 views), and comparison with CodeSLAM on the EuroC dataset (Burri et al., 2016) to the appendix.
Researcher Affiliation | Academia | Chengzhou Tang, School of Computer Science, Simon Fraser University (chengzhou_tang@sfu.ca); Ping Tan, School of Computer Science, Simon Fraser University (pingtan@sfu.ca)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository for the described methodology.
Open Datasets | Yes | To demonstrate the effectiveness of our method, we evaluate on the ScanNet (Dai et al., 2017a) and KITTI (Geiger et al., 2012) datasets.
Dataset Splits | No | The paper explicitly describes training and testing splits for ScanNet and refers to existing splits for KITTI, but does not explicitly provide details for a validation dataset split.
Hardware Specification | No | The paper mentions 'limited GPU memory (12G)' but does not specify the exact GPU model, CPU, or any other specific hardware components used for the experiments.
Software Dependencies | No | The paper mentions using 'Tensorflow' but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup | Yes | We apply the differentiable LM algorithm for 5 iterations at each pyramid level, leading to 15 iterations in total. We initialize the backbone network from DRN-54 (Yu et al., 2017), and the other components are trained with ADAM (Kingma & Ba, 2015) from scratch with initial learning rate 0.001, and the learning rate is divided by two when we observe plateaus from the Tensorboard interface.
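For readers who want a concrete picture of that quoted setup, below is a minimal, hypothetical TensorFlow sketch of the training configuration. It is not the authors' code: names such as `BANetModel`, `initialize_state`, `lm_step`, and `loss_fn` are placeholders, and the three-level pyramid is inferred from "5 iterations at each pyramid level, leading to 15 iterations in total".

```python
# Hypothetical sketch of the BA-Net training setup described above (not the authors' code).
import tensorflow as tf

NUM_PYRAMID_LEVELS = 3   # assumed: 5 LM iterations per level x 3 levels = 15 in total
LM_ITERS_PER_LEVEL = 5

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)  # ADAM, initial learning rate 0.001


def forward(model, images):
    """Run the differentiable LM optimisation coarse-to-fine over the feature pyramid."""
    # `initialize_state` and `lm_step` are placeholder names, not APIs from the paper.
    state = model.initialize_state(images)            # initial depth code and camera poses
    for level in range(NUM_PYRAMID_LEVELS):
        for _ in range(LM_ITERS_PER_LEVEL):
            state = model.lm_step(state, level)       # one differentiable LM update
    return state


def train_step(model, images, targets, loss_fn):
    """One ADAM update through the end-to-end differentiable pipeline."""
    with tf.GradientTape() as tape:
        prediction = forward(model, images)
        loss = loss_fn(prediction, targets)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss


# Per the quoted setup, the learning rate is halved by hand when TensorBoard shows a plateau, e.g.:
# optimizer.learning_rate = 5e-4
```

The coarse-to-fine double loop mirrors the quoted schedule of 5 differentiable LM iterations per pyramid level; the manual learning-rate halving reflects the paper's statement that plateaus were observed from the Tensorboard interface rather than handled by an automatic schedule.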