Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PromptDet: A Lightweight 3D Object Detection Framework with LiDAR Prompts

Authors: Kun Guo, Qiang Ling

AAAI 2025 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on nuScenes validate the effectiveness of the proposed PromptDet." |
| Researcher Affiliation | Academia | Kun Guo, Qiang Ling*; Dept. of Automation, University of Science and Technology of China, Hefei 230027, China. EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using textual descriptions and block diagrams (Figure 2, Figure 3, Figure 4) but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/lihuashengmax/PromptDet |
| Open Datasets | Yes | "We evaluate our framework on the nuScenes dataset (Caesar et al. 2020), one of the most challenging benchmarks in autonomous driving." |
| Dataset Splits | Yes | "The dataset consists of 1,000 driving scenarios, divided into 700 for training, 150 for validation, and 150 for testing." |
| Hardware Specification | Yes | "All experiments are conducted in PyTorch using 4 NVIDIA A40 GPUs (45GB memory)." |
| Software Dependencies | No | "All experiments are conducted in PyTorch using 4 NVIDIA A40 GPUs (45GB memory), based on the MMDetection3D (Contributors 2020) codebase." The paper mentions PyTorch and MMDetection3D but does not provide specific version numbers for these key software components, which is required for a reproducible description. |
| Experiment Setup | Yes | "We use ResNet-50 (He et al. 2016) as the image backbone... AdamW (Loshchilov and Hutter 2017) is used as the optimizer with a step-scheduled learning rate. When compared with other methods, we train the model for 30 epochs with CBGS (Zhu et al. 2019); other experiments are trained for 36 epochs without CBGS." |
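The "step-scheduled learning rate" named in the experiment setup can be sketched as a plain function; the base rate, decay factor, and milestone epochs below are illustrative assumptions, since the excerpt above does not report them:

```python
def step_lr(epoch, base_lr=2e-4, gamma=0.1, milestones=(24, 32)):
    """Step learning-rate schedule: the base rate is multiplied by
    `gamma` once for every milestone epoch already reached.

    NOTE: base_lr, gamma, and milestones are hypothetical values for
    illustration, not figures reported in the paper.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr


# Example: before the first milestone the base rate is used; after each
# milestone the rate drops by a factor of gamma.
print(step_lr(0))    # base rate
print(step_lr(24))   # after first milestone: base_lr * gamma
print(step_lr(35))   # after both milestones: base_lr * gamma**2
```

In practice such a schedule is typically configured via the training framework's scheduler (e.g. a multi-step scheduler in the MMDetection3D config) rather than hand-rolled, but the decay behavior is the same.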