RADIANT: Radar-Image Association Network for 3D Object Detection
Authors: Yunfei Long, Abhinav Kumar, Daniel Morris, Xiaoming Liu, Marcos Castro, Punarjay Chakravarty
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show significant improvement in mean average precision and translation error on the nuScenes dataset over monocular counterparts. |
| Researcher Affiliation | Collaboration | ¹Michigan State University, ²Ford Motor Company; {longyunf, kumarab6, dmorris, liuxm}@msu.edu, {mgerard8, pchakra5}@ford.com |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/longyunf/radiant. |
| Open Datasets | Yes | We apply the proposed method on the detection task of the nuScenes dataset (Caesar et al. 2020), a widely used dataset with both image and radar points collected in urban driving environments. |
| Dataset Splits | Yes | The nuScenes detection dataset consists of 28,130 training samples, 6,019 validation samples, and 6,008 test samples (see the split-loading sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies, such as programming languages or libraries. |
| Experiment Setup | No | The paper mentions using FCOS3D and ResNet-18 and describes architectural choices such as freezing the image branch (see the freezing sketch after the table), but it does not provide specific hyperparameters (e.g., learning rate, batch size, epochs) or detailed training configurations in the main text. It refers readers to supplementary material for 'Details of the input vector' of the DWN, but not for the general experiment setup. |
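The split sizes quoted in the Dataset Splits row can be verified programmatically. Below is a minimal sketch, assuming the `nuscenes-devkit` package and a local copy of the v1.0-trainval release at a placeholder path; neither the path nor this counting loop comes from the paper.

```python
# Minimal sketch: counting samples in the official nuScenes train/val splits
# with the nuscenes-devkit (pip install nuscenes-devkit). The dataroot path
# is a placeholder; the totals should match the 28,130 / 6,019 figures above.
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

nusc = NuScenes(version='v1.0-trainval', dataroot='/data/nuscenes', verbose=False)
splits = create_splits_scenes()  # maps split name -> list of scene names

# Each scene record stores its sample count, so per-split totals are a sum.
samples_per_scene = {s['name']: s['nbr_samples'] for s in nusc.scene}
for split in ('train', 'val'):
    total = sum(samples_per_scene[name] for name in splits[split])
    print(f'{split}: {total} samples')
```

The 6,008 test samples live in the separate v1.0-test release, so they are not counted here.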
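For the frozen image branch noted in the Experiment Setup row, the generic PyTorch pattern looks like the following. This is a sketch of the technique, not RADIANT's actual code; `img_branch` is an assumed attribute name.

```python
# Sketch of freezing an image branch in PyTorch; `img_branch` is an assumed
# module name, not taken from the RADIANT repository.
import torch

def freeze_image_branch(model: torch.nn.Module) -> None:
    """Stop gradient flow through the image branch and fix its BN/dropout."""
    for param in model.img_branch.parameters():
        param.requires_grad = False
    model.img_branch.eval()

# Hand only the still-trainable (radar-side) parameters to the optimizer:
# optimizer = torch.optim.Adam(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```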