Every View Counts: Cross-View Consistency in 3D Object Detection with Hybrid-Cylindrical-Spherical Voxelization
Authors: Qi Chen, Lin Sun, Ernest Cheung, Alan L. Yuille
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on the challenging nuScenes dataset validate the effectiveness of our proposed method, which leverages joint optimization and complementary information on both views. |
| Researcher Affiliation | Collaboration | Qi Chen Johns Hopkins University Baltimore, MD, USA; Lin Sun Samsung Semiconductor, Inc. San Jose, CA, USA; Ernest Cheung Samsung Semiconductor, Inc. San Jose, CA, USA; Alan Yuille Johns Hopkins University Baltimore, MD, USA |
| Pseudocode | No | The paper includes architectural diagrams (Figure 1) and describes processes in text, but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | We evaluate our CVCNet on the nuScenes 3D detection dataset [9]. The dataset contains 1,000 scenes, including 700 scenes for training, 150 scenes for validation and 150 scenes for test. ... The experiments are conducted on the Waymo Open Dataset which is a large-scale dataset recently released for benchmarking object detection algorithms at industrial production level and the performance is shown in Table 4. |
| Dataset Splits | Yes | The dataset contains 1,000 scenes, including 700 scenes for training, 150 scenes for validation and 150 scenes for test. |
| Hardware Specification | Yes | Our algorithm runs at 11 FPS with a single V100 GPU on Waymo Open Dataset. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Network Details: the r, θ, and φ ranges are [0.5, 51.1], [−3.141, 3.141], and [−1.3, 0.8], and the shell voxel size is (0.1, 0.003, 0.0125). The max number of points per voxel is 8. We set loss weights α = β = γ = ζ = 1. |
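The quoted voxelization parameters imply a fixed grid resolution per spherical coordinate. The sketch below is a back-of-envelope check (not the authors' code) that divides each quoted range extent by the corresponding shell voxel size to recover the grid dimensions; the negative bounds on θ and φ are assumed from the paper's symmetric/downward-looking field of view.

```python
# Quoted ranges and shell voxel sizes from the paper's network details.
# Minus signs on the lower bounds are restored from context (PDF extraction
# had dropped them); this is an illustrative consistency check only.
ranges = {"r": (0.5, 51.1), "theta": (-3.141, 3.141), "phi": (-1.3, 0.8)}
voxel_size = {"r": 0.1, "theta": 0.003, "phi": 0.0125}

# Grid size per axis = (upper bound - lower bound) / voxel size, rounded
# to absorb floating-point error.
grid = {axis: round((hi - lo) / voxel_size[axis])
        for axis, (lo, hi) in ranges.items()}
print(grid)  # {'r': 506, 'theta': 2094, 'phi': 168}
```

Each axis divides evenly, which suggests the ranges and voxel sizes were chosen together to yield integer grid dimensions.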