Far3D: Expanding the Horizon for Surround-View 3D Object Detection

Authors: Xiaohui Jiang, Shuailin Li, Yingfei Liu, Shihao Wang, Fan Jia, Tiancai Wang, Lijin Han, Xiangyu Zhang

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Significantly, Far3D demonstrates SoTA performance on the challenging Argoverse 2 dataset, covering a wide range of 150 meters, surpassing several LiDAR-based approaches. The code is available at https://github.com/megvii-research/Far3D. |
| Researcher Affiliation | Collaboration | Beijing Institute of Technology; MEGVII Technology |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/megvii-research/Far3D. |
| Open Datasets | Yes | We use the large-scale Argoverse 2 dataset (Wilson et al. 2023) and the nuScenes dataset (Caesar et al. 2020) to explore and evaluate the effectiveness of our approach. |
| Dataset Splits | Yes | Argoverse 2 is a dataset for perception and prediction studies in the autonomous driving domain. It contains 1000 scenes of 15 seconds each, annotated at 10 Hz, divided into 700 for training, 150 for validation, and 150 for testing. (See the split sketch below the table.) |
| Hardware Specification | No | The paper mentions backbone architectures (VoVNet-99, ViT-L, ResNet101) but does not specify the actual hardware (e.g., GPU model, CPU type) used for the experiments. |
| Software Dependencies | No | The paper mentions various methods and models (YOLOX, FCOS3D, StreamPETR) and the AdamW optimizer but does not provide specific software versions (e.g., Python 3.x, PyTorch x.x) for reproducibility. |
| Experiment Setup | Yes | We use the AdamW (Loshchilov and Hutter 2017) optimizer with a weight decay of 0.01. The total batch size is 8 and the learning rate is set to 2e-4. The models are trained for a total of 6 epochs, following the previous method (Chen et al. 2023). (See the training-setup sketch below the table.) |
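As a quick consistency check on the reported Argoverse 2 split, here is a minimal Python sketch; the dictionary and variable names are illustrative only and are not taken from the Far3D codebase.

```python
# Hypothetical encoding of the Argoverse 2 split described in the paper:
# 1000 scenes total, each 15 s long, annotated at 10 Hz.
ARGOVERSE2_SPLITS = {"train": 700, "val": 150, "test": 150}

SCENE_DURATION_S = 15  # duration of each scene in seconds
ANNOTATION_HZ = 10     # annotation frequency

# The three splits should cover all 1000 scenes.
assert sum(ARGOVERSE2_SPLITS.values()) == 1000

# Back-of-envelope count of annotated frames per scene and per split.
frames_per_scene = SCENE_DURATION_S * ANNOTATION_HZ  # 150 frames
for split, n_scenes in ARGOVERSE2_SPLITS.items():
    print(f"{split}: {n_scenes} scenes, ~{n_scenes * frames_per_scene:,} annotated frames")
```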
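The reported optimization settings map directly onto a standard PyTorch training loop. The sketch below is a minimal illustration assuming plain PyTorch; the model, loss, and tensors are toy stand-ins for the actual Far3D detector and data, and only the optimizer settings, batch size, and epoch count come from the paper.

```python
import torch

# Toy stand-ins; the real Far3D detector and surround-view data differ.
model = torch.nn.Linear(256, 10)
criterion = torch.nn.MSELoss()
inputs = torch.randn(8, 256)   # total batch size 8, as reported
targets = torch.randn(8, 10)

# AdamW with lr 2e-4 and weight decay 0.01, as stated in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=0.01)

NUM_EPOCHS = 6  # trained for 6 epochs in total, following Chen et al. 2023

for epoch in range(NUM_EPOCHS):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```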