Deception Detection in Videos
Authors: Zhe Wu, Bharat Singh, Larry S. Davis, V. S. Subrahmanian
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 104 court room trial videos demonstrate the effectiveness and the complementary nature of our low-level and high-level features. |
| Researcher Affiliation | Academia | Zhe Wu¹, Bharat Singh¹, Larry S. Davis¹, V. S. Subrahmanian² (¹University of Maryland, ²Dartmouth College); {zhewu,bharat,lsd}@umiacs.umd.edu, vs@dartmouth.edu |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations, and includes block diagrams (e.g., Figure 2) but does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | Yes | We evaluate our automated deception detection approach on a real-life deception detection database (Pérez-Rosas et al. 2015). |
| Dataset Splits | Yes | We perform 10-fold cross validation using identities instead of video samples for all the following experiments, i.e. no identity in the test set belongs to the training set. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions software components like 'LibSVM' and 'GloVe', but does not provide specific version numbers for these or any other ancillary software dependencies. |
| Experiment Setup | Yes | We divide each video in the database into short fixed-duration video clips... the duration of v^j_i is a constant (4 seconds in our implementation). We sample frames for each video clip using a frame rate of 15 fps. The micro-expression detectors are trained using a linear kernel SVM using LibSVM. We use a polynomial kernel for Kernel SVM because it performs best. For Naive Bayes classifier, we use normal distributions... For logistic regression, we use Binomial distribution. In Random Forest, the number of trees is 50. In AdaBoost, we use decision trees as the weak learners. |
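
The Dataset Splits row above describes 10-fold cross validation over identities rather than video samples, so no identity in a test fold appears in the corresponding training fold. A minimal sketch of that identity-grouped protocol, assuming scikit-learn's `GroupKFold` (the paper does not state which tooling it used; the arrays below are illustrative placeholders):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Illustrative placeholders: X holds one feature row per video,
# y the deceptive/truthful label, and identities the speaker identity
# of each video (values here are random stand-ins).
X = np.random.rand(104, 40)
y = np.random.randint(0, 2, size=104)            # 1 = deceptive, 0 = truthful
identities = np.random.randint(0, 58, size=104)  # assumed identity IDs

# 10 folds split by identity, mirroring the paper's protocol.
gkf = GroupKFold(n_splits=10)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=identities)):
    train_ids = set(identities[train_idx])
    test_ids = set(identities[test_idx])
    # The paper's condition: no identity in the test set belongs to the training set.
    assert train_ids.isdisjoint(test_ids)
    # ... train and evaluate a classifier on this fold ...
```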
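
The Experiment Setup row quotes the classifier configurations the paper reports. Below is a minimal scikit-learn sketch of those settings; the paper uses LibSVM directly (which scikit-learn's `SVC` wraps), and any hyperparameter not quoted above is an assumption left at library defaults:

```python
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# Clip preprocessing constants quoted in the paper.
CLIP_DURATION_SEC = 4   # fixed-duration video clips
FRAME_RATE_FPS = 15     # frame sampling rate within each clip

# Micro-expression detectors: linear-kernel SVM (LibSVM in the paper;
# SVC is built on the same library).
micro_expression_detector = SVC(kernel="linear")

# Classifiers compared for deception prediction, mirroring the quoted settings.
classifiers = {
    "Kernel SVM (polynomial kernel)": SVC(kernel="poly"),
    "Naive Bayes (normal distributions)": GaussianNB(),
    "Logistic Regression (binomial)": LogisticRegression(),
    "Random Forest (50 trees)": RandomForestClassifier(n_estimators=50),
    # scikit-learn's default AdaBoost weak learner is a decision tree,
    # matching the "decision trees as the weak learners" description.
    "AdaBoost (decision-tree weak learners)": AdaBoostClassifier(),
}
```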