TARSS-Net: Temporal-Aware Radar Semantic Segmentation Network
Authors: Youcheng Zhang, Liwen Zhang, Zijun Hu, Pengcheng Pi, Teng Li, Yuanpei Chen, Shi Peng, Zhe Ma
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments. 4.1 Datasets and training setup. 4.2 Comparisons with State-of-The-Art Methods. 4.3 Ablation Study. Model performance verification with real-measured data in different detection scenarios. To verify the scope of application of TARSS-Net, we conduct quantitative experiments on different real-measured large-scale radar datasets, including CARRADA [22], which is collected from a low-cost FMCW (77 GHz) on-board radar in driving scenarios, and a self-collected dataset, KuRALS, recorded from a Kurz-under (Ku) band (17 GHz) radar for UAV surveillance and sea monitoring. Experimental results show TARSS-Net can achieve state-of-the-art (SoTA) performance (Sec. 4). |
| Researcher Affiliation | Collaboration | Youcheng Zhang¹, Liwen Zhang¹, Zijun Hu¹, Pengcheng Pi¹, Teng Li², Yuanpei Chen¹, Shi Peng¹, Zhe Ma¹ (¹Intelligent Science and Technology Academy of CASIC; ²Shenzhen International Graduate School, Tsinghua University) |
| Pseudocode | No | The paper describes its methods using text and mathematical equations, and provides architectural diagrams (Fig. 2, 3, 4, S1, S2, S3, S4), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and supplementary materials are available at https://github.com/zlw9161/TARSS-Net. |
| Open Datasets | Yes | Three datasets have been used to validate the performance of TARSS-Net, including: CARRADA [22], which contains multi-view annotated radar recordings (RAD tensors) for 4 categories of objects in driving scenarios under different weather conditions; CARRADA-RAC [35], which is an improved version of CARRADA with calibrations on the RA view; and the self-collected KuRALS dataset, recorded from a Ku-band (17 GHz) radar for UAV surveillance and sea monitoring. |
| Dataset Splits | Yes | The training, validation and test subsets were split as in [21]. |
| Hardware Specification | Yes | Running platform: The experimental platform of this work is an RTX 3090 GPU with 24 GB memory. |
| Software Dependencies | No | The paper mentions the Adam optimizer and its default settings but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or other libraries). |
| Experiment Setup | Yes | Hyper-parameters: All the models were trained with the Adam optimizer [15] using the default hyper-parameter settings, β1 = 0.9, β2 = 0.999 and ε = 1e-8. The initial learning rate was 1e-4, which decayed exponentially at a rate of 0.9 every 20 epochs. The training epochs and mini-batch size were 300 and 6, respectively (see the training-setup sketch below). |
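
For concreteness, the reported training setup maps onto the following minimal sketch. PyTorch is an assumption here (the paper names no framework or versions, per the Software Dependencies row), and the model and data are hypothetical stand-ins; only the optimizer settings, learning-rate schedule, epoch count, and mini-batch size come from the paper.

```python
# Minimal training-setup sketch of the reported hyper-parameters.
# Assumption: PyTorch; the stand-in model and dummy data are NOT from the paper.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(3, 4, kernel_size=3, padding=1)  # hypothetical stand-in for TARSS-Net

# Adam with the paper's (default) hyper-parameters: betas=(0.9, 0.999), eps=1e-8.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)

# Exponential decay: multiply the learning rate by 0.9 once every 20 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.9)

# Dummy 4-class segmentation data with the reported mini-batch size of 6.
data = TensorDataset(torch.randn(12, 3, 64, 64),
                     torch.randint(0, 4, (12, 64, 64)))
loader = DataLoader(data, batch_size=6, shuffle=True)

criterion = nn.CrossEntropyLoss()
for epoch in range(300):  # 300 training epochs, as reported
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr = 1e-4 * 0.9 ** (epoch // 20)
```

`StepLR` with `step_size=20` and `gamma=0.9` matches the stated schedule: the learning rate is held for 20 epochs, then scaled by 0.9, giving the exponential decay the paper describes.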