On Single Source Robustness in Deep Fusion Models

Authors: Taewan Kim, Joydeep Ghosh

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that both training algorithms and our fusion layer make a deep fusion-based 3D object detector robust against noise applied to a single source, while preserving the original performance on clean data.
Researcher Affiliation | Academia | Taewan Kim, The University of Texas at Austin, Austin, TX, twankim@utexas.edu; Joydeep Ghosh, The University of Texas at Austin, Austin, TX, jghosh@utexas.edu
Pseudocode | Yes | Algorithm 1 TRAINSSN, Algorithm 2 TRAINSSNALT (a hedged sketch of these training loops follows this table)
Open Source Code | Yes | The source code is available at https://github.com/twankim/avod_ssn.
Open Datasets | Yes | We test our algorithms and the LEL fusion method on 3D and BEV object detection tasks using the car class of the KITTI dataset [10].
Dataset Splits | Yes | We follow the split of Ku et al. [25]: 3712 and 3769 frames for the training and validation sets, respectively.
Hardware Specification | Yes | The computing machine has an Intel Xeon E5-1660v3 CPU with Nvidia Titan X Pascal GPUs.
Software Dependencies | No | Our methods are implemented with TensorFlow on top of the official AVOD code. The paper mentions TensorFlow but does not specify its version or any other software dependencies with versions.
Experiment Setup | Yes | We follow the original training setups of AVOD, e.g., 120k iterations using an ADAM optimizer with an initial learning rate of 0.0001. (a minimal configuration sketch follows this table)
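
The paper's Algorithms 1 (TRAINSSN) and 2 (TRAINSSNALT) are not reproduced in this report. The Python sketch below shows one plausible shape for single-source-noise training of a two-source (camera + LiDAR) fusion detector. It is an illustration only: the Gaussian noise model, the corrupt helper, the model.train_step interface, and the alternating clean/noisy schedule in train_ssn_alt are all assumptions, not the authors' actual code or AVOD's API.

import random
import numpy as np

def corrupt(source, std=0.1):
    """Additive Gaussian noise; the paper's exact noise model is an assumption here."""
    return source + np.random.normal(0.0, std, size=source.shape).astype(source.dtype)

def train_ssn(model, batches, num_steps=120_000):
    """Sketch of single-source-noise (SSN) training: at each step, noise is
    applied to exactly one randomly chosen input source, never both."""
    for _, (image, lidar, labels) in zip(range(num_steps), batches):
        if random.random() < 0.5:
            image = corrupt(image)  # corrupt only the camera source
        else:
            lidar = corrupt(lidar)  # corrupt only the LiDAR/BEV source
        model.train_step(image, lidar, labels)  # hypothetical update function

def train_ssn_alt(model, batches, num_steps=120_000):
    """Sketch of an alternating variant: clean and single-source-noisy updates
    are interleaved, which is one plausible reading of TRAINSSNALT."""
    for step, (image, lidar, labels) in zip(range(num_steps), batches):
        if step % 2 == 0:
            model.train_step(image, lidar, labels)  # clean step
        elif random.random() < 0.5:
            model.train_step(corrupt(image), lidar, labels)  # noisy camera step
        else:
            model.train_step(image, corrupt(lidar), labels)  # noisy LiDAR step

Confining noise to a single source per update, rather than perturbing all inputs at once, is what distinguishes this setup from generic data augmentation and matches the paper's stated goal of single-source robustness.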
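
For the reported experiment setup (120k iterations with an ADAM optimizer and an initial learning rate of 0.0001), the following is a minimal TensorFlow 1.x sketch. AVOD is built on TF 1.x graph mode, but the exact version is unstated in the paper; detection_loss and the dummy weights variable are stand-ins for AVOD's real loss graph.

import tensorflow as tf  # TF 1.x assumed; the paper does not state a version

NUM_ITERATIONS = 120_000  # "120k iterations"
INITIAL_LR = 1e-4         # "initial learning rate of 0.0001"

weights = tf.Variable(tf.zeros([10]))  # dummy parameters for illustration
detection_loss = tf.reduce_sum(tf.square(weights - 1.0))  # stand-in loss

global_step = tf.train.get_or_create_global_step()
train_op = tf.train.AdamOptimizer(learning_rate=INITIAL_LR).minimize(
    detection_loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(NUM_ITERATIONS):
        sess.run(train_op)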