Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
WiFi CSI Based Temporal Activity Detection via Dual Pyramid Network
Authors: Zhendong Liu, Le Zhang, Bing Li, Yingjie Zhou, Zhenghua Chen, Ce Zhu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show our method outperforms challenging baselines. We collected CSI samples to create the dataset used in our experiments. We evaluate our model's performance using mean Average Precision (mAP) at several temporal Intersection over Union (tIoU) thresholds. We report our main results in Table 3, which demonstrate that our model outperforms all baselines and achieves state-of-the-art performance on the dataset. Ablation Study: To further verify the efficacy of our contributions, we conduct extensive ablation studies on the dataset for our method in Table 4. |
| Researcher Affiliation | Academia | 1School of Information and Communication Engineering, University of Electronic Science and Technology of China; 2College of Computer Science, Sichuan University; 3Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore |
| Pseudocode | No | The paper describes the methodology using narrative text and mathematical formulas but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/AVC2-UESTC/WiFiTAD |
| Open Datasets | No | We also collected and annotated a comprehensive untrimmed WiFi CSI dataset covering seven daily activities: walk, run, jump, wave, fall, sit, and stand. This dataset includes 553 untrimmed samples with 2,114 activity instances, each annotated with start time, end time, and category. We collected CSI samples to create the dataset used in our experiments. |
| Dataset Splits | Yes | The whole dataset is split with a 7:3 ratio as the training and testing subsets. |
| Hardware Specification | Yes | Training was conducted on a workstation equipped with an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz and two Nvidia 3090 GPUs, with a batch size of 2. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam' but does not specify version numbers for these software components. A reproducible description requires specific version numbers. |
| Experiment Setup | Yes | The model is implemented in PyTorch, using Adam as the optimizer with an initial learning rate of 4e-5 and a weight decay coefficient of 1e-3. Training was conducted on a workstation equipped with an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz and two Nvidia 3090 GPUs, with a batch size of 2. Training our dataset required 40 epochs, taking approximately 4 hours to complete. During the inference stage, the model outputs were processed by Soft-NMS, with a sigma value of 0.95 and a confidence threshold of 0.01. For both training and inference, single signal samples were divided into clips, each with a length of 4096 timestamps (approximately 41 seconds, covering over 2 activities), with a stride of 0.5. We utilized 8 TSSE and LSRE backbones as feature encoders, and the output features from the last 4 layers were used for detection. Regarding the hyperparameters, the coefficient λ in the objective function was set to 10, the scale τ in ContraNorm was set to 0.1, and the confidence threshold in focal loss β was set to 0.9. |
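The clip slicing reported in the Experiment Setup row (windows of 4096 timestamps with a stride of 0.5) can be sketched in plain Python. This is a minimal illustration, not the authors' code: the function name `slice_clips` and the interpretation of "stride 0.5" as a hop of half the window length are assumptions.

```python
def slice_clips(num_timestamps, clip_len=4096, stride_frac=0.5):
    """Return (start, end) index pairs for overlapping clips.

    stride_frac=0.5 means consecutive clips overlap by 50%, which is
    one plausible reading of the paper's reported stride of 0.5.
    """
    hop = int(clip_len * stride_frac)
    clips = []
    start = 0
    while start + clip_len <= num_timestamps:
        clips.append((start, start + clip_len))
        start += hop
    # keep a final right-aligned clip so trailing samples are covered
    if clips and clips[-1][1] < num_timestamps:
        clips.append((num_timestamps - clip_len, num_timestamps))
    elif not clips:
        clips.append((0, num_timestamps))
    return clips

# e.g. a 10240-timestamp recording yields four half-overlapping clips
print(slice_clips(10240))
```

Each clip of 4096 timestamps then passes through the feature encoders independently, as described above.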
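The inference-stage post-processing (Soft-NMS with sigma 0.95 and a 0.01 confidence threshold) can be illustrated with a minimal Gaussian Soft-NMS over 1-D temporal segments. This is a sketch under assumptions: the greedy formulation, the Gaussian decay `exp(-iou**2 / sigma)`, and all function names are illustrative, not taken from the paper's implementation.

```python
import math

def temporal_iou(a, b):
    """IoU of two 1-D temporal segments given as (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(segments, scores, sigma=0.95, score_thresh=0.01):
    """Gaussian Soft-NMS over temporal detections.

    Instead of discarding segments that overlap the current best,
    their scores are decayed by exp(-iou^2 / sigma); detections whose
    decayed score falls below score_thresh are dropped.
    """
    pool = list(zip(segments, scores))
    kept = []
    while pool:
        # greedily take the highest-scoring remaining detection
        best_idx = max(range(len(pool)), key=lambda i: pool[i][1])
        best_seg, best_score = pool.pop(best_idx)
        if best_score < score_thresh:
            break
        kept.append((best_seg, best_score))
        # soften, rather than suppress, overlapping detections
        pool = [(s, sc * math.exp(-temporal_iou(best_seg, s) ** 2 / sigma))
                for s, sc in pool]
    return kept
```

With a large sigma such as 0.95, heavily overlapping duplicates are down-weighted but can survive if their initial confidence is high, which suits densely packed activity instances in untrimmed CSI streams.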