Exploiting Polarized Material Cues for Robust Car Detection
Authors: Wen Dong, Haiyang Mei, Ziqi Wei, Ao Jin, Sen Qiu, Qiang Zhang, Xin Yang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively validate our method and demonstrate that it outperforms state-of-the-art detection methods. Experimental results show that polarization is a powerful cue for car detection. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Social Computing and Cognitive Intelligence, Dalian University of Technology; (2) Show Lab, National University of Singapore; (3) Institute of Automation, Chinese Academy of Sciences; (4) State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/wind1117/AAAI24-PCDNet. |
| Open Datasets | No | We construct the first pixel-aligned RGB-polarization car detection dataset called RGBP-Car with trichromatic polarization measurements. (The paper does not explicitly state that RGBP-Car is publicly available; the general code link may or may not include the dataset.) |
| Dataset Splits | No | Table 1 lists 'Images Train / Test' as 1611 / 990 and 'Cars Train / Test' as 19582 / 11652 for RGBP-Car, but no explicit validation split details are provided. |
| Hardware Specification | Yes | We implement our PCDNet in PyTorch (Paszke et al. 2019) and train it for 300 epochs with a batch size of 32 on two NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | We implement our PCDNet in PyTorch (Paszke et al. 2019)... (PyTorch is mentioned, but a specific version number is not provided). |
| Experiment Setup | Yes | We implement our PCDNet in PyTorch (Paszke et al. 2019) and train it for 300 epochs with a batch size of 32 on two NVIDIA GeForce RTX 3090 GPUs. We use stochastic gradient descent (SGD) (Amari 1993) with a momentum of 0.937 and a weight decay of 5 × 10⁻⁴ during training. The initial learning rate is set to 0.01 and decayed to 0.001 using a cosine annealing schedule. |
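
For context, the snippet below is a minimal PyTorch sketch of the reported optimizer and learning-rate schedule. The placeholder `model`, the loop skeleton, and the per-epoch `scheduler.step()` call are assumptions for illustration; they are not taken from the authors' released code at https://github.com/wind1117/AAAI24-PCDNet.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
# Assumptions: a placeholder model, and that the cosine schedule is stepped once
# per epoch across the full 300-epoch run.
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Conv2d(3, 16, 3)  # placeholder module standing in for PCDNet

EPOCHS = 300      # reported: 300 training epochs
BATCH_SIZE = 32   # reported: batch size 32 across two RTX 3090 GPUs

# Reported hyperparameters: SGD with momentum 0.937, weight decay 5e-4,
# initial learning rate 0.01 annealed toward 0.001 with a cosine schedule.
optimizer = SGD(model.parameters(), lr=0.01, momentum=0.937, weight_decay=5e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS, eta_min=0.001)

for epoch in range(EPOCHS):
    # ... forward pass, loss, and backward pass over RGBP-Car batches go here ...
    optimizer.step()   # placeholder step; a real loop computes gradients first
    scheduler.step()   # decay the learning rate from 0.01 toward 0.001
```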