Action Recognition with Joints-Pooled 3D Deep Convolutional Descriptors

Authors: Congqi Cao, Yifan Zhang, Chunjie Zhang, Hanqing Lu

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on real-world datasets show that our method generates promising results, outperforming state-of-the-art results significantly."
Researcher Affiliation | Academia | National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; School of Computer and Control Engineering, University of Chinese Academy of Sciences
Pseudocode | No | The paper describes the steps of the framework (e.g., "The main procedures of our framework are as follows:"), but it does not provide a formal pseudocode block or algorithm section.
Open Source Code | No | The paper does not contain any statement about making source code publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | "We evaluate our method on three public action datasets: sub-JHMDB [Jhuang et al., 2013], Penn Action [Zhang et al., 2013] and Composable Activities [Lillo et al., 2014]."
Dataset Splits | Yes | "We use the 3-fold cross validation setting provided by the dataset for experiments." "We use the 50/50 training/testing split provided by the dataset to do experiments."
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software such as C3D, a linear SVM (LIBLINEAR), and HMMs, but it does not provide version numbers for any of these dependencies.
Experiment Setup | Yes | "For different splits, we use the same finetuning settings. We do finetuning using mini-batches of 30 clips, with learning rate of 0.0003. The finetuning is stopped after 5000 iterations."
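The quoted finetuning settings can be collected into a minimal configuration sketch. Only the three reported values (batch size 30, learning rate 0.0003, 5000 iterations) come from the paper; the dictionary keys and the helper function are hypothetical names introduced here for illustration, not the authors' code:

```python
# Hypothetical sketch of the finetuning schedule quoted above.
# Only batch_size, learning_rate, and max_iterations are reported in the
# paper; everything else here is an assumption for illustration.
finetune_config = {
    "batch_size": 30,        # clips per mini-batch (reported)
    "learning_rate": 3e-4,   # reported as 0.0003
    "max_iterations": 5000,  # reported stopping point
}

def total_clips_processed(cfg):
    """Clips seen over the whole finetuning run (iterations x batch size)."""
    return cfg["batch_size"] * cfg["max_iterations"]
```

Under these settings, a run would process 30 clips per iteration for 5000 iterations, i.e., 150,000 clip samples in total.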