Fusion-Vital: Video-RF Fusion Transformer for Advanced Remote Physiological Measurement

Authors: Jae-Ho Choi, Ki-Bong Kang, Kyung-Tae Kim

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also perform comprehensive experiments based on a newly collected and released remote vital dataset comprising synchronized video-RF sensors, showing the superiority of the fusion approach over the previous single-sensor baselines in various aspects."
Researcher Affiliation | Collaboration | Jae-Ho Choi (Stanford University, CA, USA); Ki-Bong Kang (Samsung Electronics, South Korea); Kyung-Tae Kim (Pohang University of Science and Technology, South Korea)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states that the newly collected dataset will be made publicly available, but it provides no statement or link releasing source code for the described method.
Open Datasets | Yes | "To evaluate the effectiveness of the proposed model, we performed extensive experiments on two datasets: the publicly available RRM-static dataset (Choi, Kang, and Kim 2022) and our newly collected physiological dataset, named the Multimodal Database for rPPG (MMDrPPG)."
Dataset Splits | Yes | "To ensure subject-independent cross-validation, the datasets were divided into person-wise subfolds. More details regarding the evaluation protocols are available in the supplementary material." (a split sketch appears below the table)
Hardware Specification | No | The paper mentions that processing occurred "under workstation settings" but does not provide specifics such as the GPU model, CPU model, or memory capacity used for the experiments.
Software Dependencies | No | The paper mentions the use of the ADAM optimizer but does not specify version numbers for any software components, libraries, or programming languages.
Experiment Setup | Yes | "The proposed Fusion-Vital model was trained using the ADAM optimizer with a batch size of 64 and a learning rate of 0.0001. The inputs for the RGB branch consisted of video clips that were center-cropped and resized to 36 × 36 pixels... As for the RF branch, the received complex radio signals were transformed to the RJTF format using an STFT with a 300 ms Hann window, a hop size of 60 ms, and a 256-point FFT. The resulting images were then resized to fit the temporal dimension of the RGB inputs. To ensure a fair comparison, all temporal models were configured with a window size of 10 frames." (a preprocessing sketch appears below the table)
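
The Dataset Splits entry above quotes a person-wise, subject-independent protocol, with details deferred to the supplementary material. A minimal sketch of what such a split could look like, using scikit-learn's GroupKFold; the fold count (5), subject count (20), and placeholder data are assumptions for illustration, not values from the paper:

import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical example: each sample is tagged with the ID of the person it
# was recorded from. The number of folds and subjects are assumed, not
# reported in the paper.
rng = np.random.default_rng(0)
n_samples = 1000
X = rng.standard_normal((n_samples, 10))           # placeholder features
y = rng.standard_normal(n_samples)                 # placeholder targets (e.g., HR)
subject_ids = rng.integers(0, 20, size=n_samples)  # 20 hypothetical subjects

# GroupKFold guarantees that no subject appears in both the train and test
# partitions, which is what "person-wise subfolds" implies.
gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=subject_ids)):
    train_subjects = set(subject_ids[train_idx])
    test_subjects = set(subject_ids[test_idx])
    assert train_subjects.isdisjoint(test_subjects)
    print(f"fold {fold}: {len(train_subjects)} train subjects, "
          f"{len(test_subjects)} test subjects")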
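
The Experiment Setup entry gives enough detail to sketch the RF-branch preprocessing. The following is an illustrative reconstruction, not the authors' code: the radar sampling rate (500 Hz), the toy input signal, and the rf_to_rjtf helper are assumptions, while the 300 ms Hann window, 60 ms hop, 256-point FFT, 10-frame temporal window, and the resize-to-RGB-timeline step come from the quoted setup.

import numpy as np
from scipy.signal import stft
from scipy.ndimage import zoom

FS = 500                 # Hz, ASSUMED radar sampling rate (not in the paper)
WIN = int(0.300 * FS)    # 300 ms Hann window -> 150 samples
HOP = int(0.060 * FS)    # 60 ms hop -> 30 samples
NFFT = 256               # 256-point FFT
RGB_FRAMES = 10          # temporal window size used for all temporal models

def rf_to_rjtf(iq: np.ndarray) -> np.ndarray:
    """Turn a complex (I/Q) radio signal into an RJTF-style time-frequency
    image whose temporal axis matches the RGB clip length."""
    # Two-sided STFT, since the input is complex-valued.
    _, _, spec = stft(iq, fs=FS, window="hann", nperseg=WIN,
                      noverlap=WIN - HOP, nfft=NFFT,
                      return_onesided=False)
    mag = np.abs(spec)   # shape: (NFFT, n_time_bins)
    # Resize the temporal dimension to fit the RGB inputs.
    return zoom(mag, (1.0, RGB_FRAMES / mag.shape[1]), order=1)

iq = np.exp(2j * np.pi * 1.2 * np.arange(5 * FS) / FS)  # toy 1.2 Hz tone
rjtf = rf_to_rjtf(iq)
print(rjtf.shape)  # (256, 10)

In the full pipeline, the quoted training settings (ADAM optimizer, batch size 64, learning rate 0.0001) and the 36 × 36 center-cropped RGB clips would sit on top of this preprocessing step.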