Learning Spatio-Temporal Features With Partial Expression Sequences for On-the-Fly Prediction
Authors: Wissam J. Baddar, Yong Man Ro
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results showed that the proposed method achieved higher recognition rates compared to the state-of-the-art methods on both datasets. More importantly, the results verified that the proposed method improved the prediction frames with partial expression sequence inputs. |
| Researcher Affiliation | Academia | Wissam J. Baddar, Yong Man Ro Image and Video Systems Lab., Electrical Engineering, KAIST, South Korea {wisam.baddar,ymro}@kaist.ac.kr |
| Pseudocode | Yes | Algorithm 1: Pseudo code for training the LSTM with the proposed objective terms |
| Open Source Code | No | The paper states: 'The learning and implementation of the CNN and LSTM network (shown in Table 1) was done using TensorFlow.' It does not provide any link or explicit statement about releasing its own source code. |
| Open Datasets | Yes | The construction of the utilized MMI and Oulu-CASIA datasets was performed as follows: 1. MMI dataset (Pantic et al. 2005): ... 2. Oulu-CASIA dataset (Zhao et al. 2011): ... |
| Dataset Splits | Yes | In particular, experiments on the MMI dataset were performed with a leave-one-subject-out (LOSO) cross validation (Lee, Baddar, and Ro 2016; Liu et al. 2014; Kim et al. 2017; Lee et al. 2014), while 10-fold cross validation (Jung et al. 2015) was used for the experiments conducted on the Oulu-CASIA dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only mentions 'The learning and implementation of the CNN and LSTM network (shown in Table 1) was done using TensorFlow.' |
| Software Dependencies | No | The paper states 'The learning and implementation of the CNN and LSTM network (shown in Table 1) was done using TensorFlow.' It mentions TensorFlow but does not provide a specific version number or other software dependencies with versions. |
| Experiment Setup | Yes | The learning and implementation of the CNN and LSTM network (shown in Table 1) was done using TensorFlow. For the activation function, the rectified linear unit (ReLU) was used in all layers except the layer FCNN, in which sigmoid activation was utilized in order to bound the LSTM input features and ensure the LSTM learning stability. The CNN initial learning rate was set to 0.0001 for both the MMI and the Oulu-CASIA datasets, and the training was performed for 30 epochs. For the LSTM, the learning rate was set to 0.0001 and was reduced by a factor of 10 every 10 epochs. The LSTM training was conducted for 50 epochs. |
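The leave-one-subject-out (LOSO) protocol reported for the MMI experiments can be sketched as follows. This is a minimal illustration of LOSO splitting in general, not the authors' code; the subject IDs and the `loso_splits` helper are hypothetical.

```python
# Hedged sketch of leave-one-subject-out (LOSO) cross-validation:
# each fold holds out all samples of one subject for testing and
# trains on the remaining subjects. Subject IDs below are made up.
def loso_splits(samples):
    """Yield (held_out_subject, train_indices, test_indices) per fold.

    `samples` is a list of (subject_id, features) pairs.
    """
    subjects = sorted({subj for subj, _ in samples})
    for held_out in subjects:
        train = [i for i, (s, _) in enumerate(samples) if s != held_out]
        test = [i for i, (s, _) in enumerate(samples) if s == held_out]
        yield held_out, train, test

# Toy dataset: two samples from subject s1, one each from s2 and s3.
data = [("s1", None), ("s1", None), ("s2", None), ("s3", None)]
folds = list(loso_splits(data))
# One fold per subject; a subject's samples never appear in its own
# fold's training set, which prevents subject-identity leakage.
```

This subject-disjoint split is stricter than a random k-fold split, which matters for expression recognition because frames of the same person are highly correlated.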
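The LSTM learning-rate schedule quoted above (initial rate 0.0001, reduced by a factor of 10 every 10 epochs, 50 epochs total) can be written as a simple step-decay function. This is a hedged sketch of that schedule only; the function name and the assumption of 0-indexed epochs are ours, not the paper's.

```python
# Hedged sketch of the step-decay schedule described in the paper:
# LSTM learning rate starts at 1e-4 and is divided by 10 every 10
# epochs over a 50-epoch run. Epochs are 0-indexed (an assumption).
def lstm_learning_rate(epoch, initial_lr=1e-4, decay_every=10, factor=10.0):
    """Return the learning rate in effect at the given epoch."""
    return initial_lr / (factor ** (epoch // decay_every))

# Learning rate per epoch for the 50-epoch LSTM training run:
# epochs 0-9 use 1e-4, epochs 10-19 use 1e-5, ..., epochs 40-49 use 1e-8.
schedule = [lstm_learning_rate(e) for e in range(50)]
```

The CNN, by contrast, is described as trained for 30 epochs at a fixed 0.0001 with no decay mentioned.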