Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction

Authors: Ruijing Yang, Ziyu Guan, Zitong Yu, Xiaoyi Feng, Jinye Peng, Guoying Zhao

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type Experimental 4 Experiments and Results
Researcher Affiliation Academia Northwest University; University of Oulu; Northwestern Polytechnical University
Pseudocode No The paper describes the architecture and modules (rSTAN, STA, VFE, Deep-rPPG) but does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code No The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository.
Open Datasets Yes We evaluate the performance of the proposed method on: the BioVid Heat Pain Database (BioVid for short) [Walter et al., 2013] and the UNBC-McMaster Shoulder Pain Expression Archive Database (UNBC for short) [Lucey et al., 2011].
Dataset Splits Yes For the parameter selection, we randomly split all the 87 subjects into 5 folds and use 5-fold cross-validation to determine the best parameters. While for the comparisons to the state of the arts, we follow the leave-one-subject-out protocol (LOO) as previous works [Werner et al., 2014; Werner et al., 2016].
Hardware Specification Yes The proposed method is trained on an NVIDIA P100 using PyTorch.
Software Dependencies No The paper mentions 'PyTorch' and 'MTCNN' as software used, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup Yes Adam is used as the optimizer with a learning rate of 2e-4, which is decayed after 10 epochs with a multiplicative factor gamma=0.8. To learn the parameters of the two branches more efficiently, we train them separately and fine-tune the whole framework jointly. For the input videos, the original video is downsampled to L = 64 since it produces the best performance.
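The dataset-splits row describes two evaluation protocols: a random 5-fold split over the 87 BioVid subjects for parameter selection, and leave-one-subject-out (LOO) for comparison with prior work. A minimal sketch of both subject-level splits (the exact shuffling seed and fold assignment are assumptions, not stated in the paper):

```python
import random

def five_fold_subject_split(subject_ids, seed=0):
    # Shuffle the subjects, then deal them round-robin into 5 folds,
    # so each subject lands in exactly one fold.
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::5] for i in range(5)]

def leave_one_subject_out(subject_ids):
    # One split per subject: that subject is the test set,
    # and all remaining subjects form the training set.
    for held_out in subject_ids:
        yield [s for s in subject_ids if s != held_out], held_out
```

Splitting by subject (rather than by video clip) keeps all data from one person on the same side of the split, which is what makes the LOO protocol a fair test of cross-subject generalization.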
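The quoted schedule (base rate 2e-4, multiplied by gamma=0.8 in 10-epoch steps) reads like PyTorch's StepLR. A minimal framework-free sketch of the effective learning rate per epoch, assuming the decay repeats every 10 epochs rather than firing once:

```python
def learning_rate(epoch, base_lr=2e-4, step_size=10, gamma=0.8):
    # Step decay: the base rate is multiplied by gamma once for
    # every `step_size` completed epochs (StepLR-style schedule).
    return base_lr * gamma ** (epoch // step_size)
```

In PyTorch itself the same schedule would be `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)` wrapped around an `Adam` optimizer with `lr=2e-4`.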