Learning-Based Tracking-before-Detect for RF-Based Unconstrained Indoor Human Tracking

Authors: Zhi Wu, Dongheng Zhang, Zixin Shang, Yuqin Yuan, Hanqin Gong, Binquan Wang, Zhi Lu, Yadong Li, Yang Hu, Qibin Sun, Yan Chen

IJCAI 2024

Reproducibility assessment: each variable below is paired with its result and the supporting LLM response.
Research Type: Experimental
LLM Response: "To evaluate Neural TBD, we collect an RF-based tracking dataset in unconstrained scenarios, which encompasses 4 million annotated radar frames with up to 19 individuals acting in 6 different scenarios. Neural TBD realizes a 70% improvement in performance compared to conventional TBD methods. We first perform a group-wise shuffle on RF-UNIT and divide the data into train, validation, and test subsets, following an 8:1:1 ratio. All evaluations are reported on test sets."
Researcher Affiliation: Academia
LLM Response: "1 School of Cyber Science and Technology, University of Science and Technology of China; 2 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China. wzwyyx@mail.ustc.edu.cn, dongheng@ustc.edu.cn, {zxshang, yuanyuqin, hanqin_gong}@mail.ustc.edu.cn, {wbq0556, zhilu}@ustc.edu.cn, yadongli@mail.ustc.edu.cn, {eeyhu, qibinsun, eecyan}@ustc.edu.cn"
Pseudocode: No
LLM Response: The paper describes the architecture and components of Neural TBD, including mathematical equations, but it does not include a formal pseudocode block or algorithm listing.
Open Source Code: Yes
LLM Response: "The code and dataset will be released." "The dataset and code will be public."
Open Datasets: Yes
LLM Response: "To evaluate Neural TBD, we collect an RF-based tracking dataset in unconstrained scenarios, which encompasses 4 million annotated radar frames with up to 19 individuals acting in 6 different scenarios." "We present the RF-UNIT dataset, which encompasses million-level radar heatmaps of at most 19 individuals in multiple different scenarios." "The dataset and code will be public."
Dataset Splits: Yes
LLM Response: "We first perform a group-wise shuffle on RF-UNIT and divide the data into train, validation, and test subsets, following an 8:1:1 ratio." (A sketch of such a group-wise split appears after this table.)
Hardware Specification: Yes
LLM Response: "All experiments are conducted on a single NVIDIA A100 GPU with a batch size of 16."
Software Dependencies: No
LLM Response: The paper mentions using the Adam optimizer, but it does not specify software dependencies with version numbers for the libraries or frameworks used (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup: Yes
LLM Response: "We employ the Adam optimizer with an initial learning rate of 1.0 × 10⁻² and a weight decay of 0.05. During the training process, we adopt a step-based learning rate decay strategy. All experiments are conducted on a single NVIDIA A100 GPU with a batch size of 16." (A training-configuration sketch appears after this table.)
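The paper reports a group-wise shuffle followed by an 8:1:1 train/validation/test split, but does not provide the splitting code. Below is a minimal sketch of one way to realize it, assuming each radar frame carries a group label (e.g., a recording session ID; this grouping is a hypothetical choice, not stated in the paper) and that every group is assigned wholesale to a single subset so the test set shares no group with the training set.

```python
import numpy as np

def group_wise_split(group_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle whole groups, then split them 8:1:1 into train/val/test.

    group_ids: one group label per radar frame. Frames from the same
    group always land in the same subset.
    """
    rng = np.random.default_rng(seed)
    groups = np.unique(group_ids)
    rng.shuffle(groups)  # group-wise shuffle, not frame-wise

    n = len(groups)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train_g = set(groups[:n_train])
    val_g = set(groups[n_train:n_train + n_val])

    train_idx, val_idx, test_idx = [], [], []
    for i, g in enumerate(group_ids):
        if g in train_g:
            train_idx.append(i)
        elif g in val_g:
            val_idx.append(i)
        else:
            test_idx.append(i)
    return train_idx, val_idx, test_idx

# Example: 30 frames spread over 10 hypothetical recording groups,
# yielding an 8/1/1 split at the group level.
frame_groups = [g for g in range(10) for _ in range(3)]
train_idx, val_idx, test_idx = group_wise_split(frame_groups)
```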
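The reported training settings (Adam, initial learning rate 1.0 × 10⁻², weight decay 0.05, step-based learning rate decay, batch size 16) map directly onto a standard PyTorch configuration. The sketch below shows this mapping with a stand-in linear model and random tensors in place of the Neural TBD architecture and the RF-UNIT data; the StepLR step size, gamma, and epoch count are assumptions, since the paper does not report them.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data; the actual Neural TBD architecture and the
# RF-UNIT radar heatmaps are not reproduced here.
model = torch.nn.Linear(128, 2)
data = TensorDataset(torch.randn(256, 128), torch.randn(256, 2))
loader = DataLoader(data, batch_size=16, shuffle=True)  # batch size from the paper

# Reported settings: Adam with initial lr 1.0e-2 and weight decay 0.05.
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-2, weight_decay=0.05)

# The paper adopts a "step-based learning rate decay strategy" without
# giving its parameters; step_size and gamma here are assumptions.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(3):  # epoch count is also unreported; 3 is illustrative
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate on a fixed epoch schedule
```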