Geometry-Guided Domain Generalization for Monocular 3D Object Detection

Authors: Fan Yang, Hui Chen, Yuwei He, Sicheng Zhao, Chenghao Zhang, Kai Ni, Guiguang Ding

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple autonomous driving benchmarks demonstrate that our method achieves state-of-the-art performance in domain generalization for M3OD.
Researcher Affiliation | Collaboration | 1. Tsinghua University, 2. BNRist, 3. Hangzhou Zhuoxi Institute of Brain and Intelligence, 4. HoloMatic Technology
Pseudocode | No | The paper describes its methods through textual descriptions and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://MonoGDG.github.io/
Open Datasets | Yes | Following (Li et al. 2022b,a), we subsample 1/4 data for the nuScenes (Caesar et al. 2020), Lyft (Kesten et al. 2019), and PreSIL (Hurl, Czarnecki, and Waslander 2019) datasets...
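The quoted passage only states that 1/4 of each source dataset is used; a minimal sketch of what such frame-level subsampling might look like is given below. The fixed stride of 4, the dummy dataset, and the helper name subsample_quarter are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical illustration of "subsample 1/4 data": keep every 4th sample.
# The stride of 4 and the dummy dataset are assumptions; the paper does not
# describe how the subsampling of nuScenes / Lyft / PreSIL was performed.
import torch
from torch.utils.data import Subset, TensorDataset

def subsample_quarter(dataset, stride: int = 4):
    """Return a Subset holding every `stride`-th sample (~1/4 of the data)."""
    return Subset(dataset, list(range(0, len(dataset), stride)))

full = TensorDataset(torch.arange(1000))   # stand-in for a full source dataset
quarter = subsample_quarter(full)
print(len(full), len(quarter))             # 1000 250
```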
Dataset Splits | No | The paper mentions subsampling data and dividing datasets into source domains, but it does not specify explicit training, validation, and test dataset splits with percentages, sample counts, or references to predefined splits for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running its experiments.
Software Dependencies | No | The paper mentions using FCOS3D as the detector and an SGD optimizer but does not provide specific version numbers for software dependencies or libraries.
Experiment Setup | Yes | We employ cross-entropy loss for classification and Smooth L1 loss for the regression task, with an SGD optimizer and learning rate 0.001 (Ruder 2016).
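For reference, a minimal PyTorch-style sketch of the reported optimization setup (cross-entropy for classification, Smooth L1 for regression, SGD with learning rate 0.001) is shown below. The placeholder network, tensor shapes, and equal weighting of the two losses are assumptions; the paper's actual detector is FCOS3D and its full training configuration is not given.

```python
# Minimal sketch of the reported setup: cross-entropy (classification),
# Smooth L1 (regression), SGD with lr = 0.001. The tiny placeholder head,
# random data, and 1:1 loss weighting are assumptions for illustration.
import torch
import torch.nn as nn

class PlaceholderHead(nn.Module):
    """Stand-in for the detector head; NOT the FCOS3D architecture."""
    def __init__(self, feat_dim=64, num_classes=10, box_dim=7):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)
        self.reg = nn.Linear(feat_dim, box_dim)

    def forward(self, feats):
        return self.cls(feats), self.reg(feats)

model = PlaceholderHead()
cls_criterion = nn.CrossEntropyLoss()   # classification loss
reg_criterion = nn.SmoothL1Loss()       # regression loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

# One dummy optimization step on random features and targets.
feats = torch.randn(8, 64)
cls_targets = torch.randint(0, 10, (8,))
reg_targets = torch.randn(8, 7)

cls_logits, reg_preds = model(feats)
loss = cls_criterion(cls_logits, cls_targets) + reg_criterion(reg_preds, reg_targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```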